One of the major endeavours of modern programming languages research is formalizing our understanding of how language constructs behave, both on their own and in interaction with each other. We are interested in formalizing the meanings of the various elements of a programming language, and ultimately the language itself. This discipline is called formal semantics. In studying formal semantics, our goal is to formulate a model capable of precisely describing the behavior of every program in a given language. Such a model provides us tools to prove program correctness, program termination, or other critical properties. Furthermore, we can also use such a model to prove certain properties of the language itself, or to show the equivalence of programs in different programming languages. The knowledge gained could even help build compilers and interpreters that produce more efficient implementations of the language.
In Module 2, we already said that a mathematical model for the programming language itself would provide a succinct and precise representation of the core mechanics and let us prove certain properties. However, we have not yet given a definition of the behavior of a program in mathematical logic. For example, we described the AOE strategy in plain English as “always choosing the leftmost, innermost redex that is not in an abstraction”. A prosaic definition like this will usually not suffice. We would like to have a small set of (usually) syntax-directed rules that describe the elements of a language’s syntax in a formal, mathematical setting.
A semantic model usually comes with a set of observables, which describes the valid outputs of the model. Such outputs could be the produced value by following a number of rules, the set of all types of the language, or simply whether a program returns an error or not. In each case, we would choose an appropriate set of observables, and then build a semantic model to match.
In this course, we are primarily going to study the operational semantics of various programming languages. Operational semantics is the semantics for specifying how a program executes and possibly how to extract a result from it. More specifically, as our main goal for the course is to understand how programs interact with data and code in various programming paradigms, we are more concerned with the small-step operational semantics of such paradigms. A small-step operational semantics builds an imaginary “machine” and succinctly describes how this machine might take individual steps, rather than describing the entire computation in one step.
As mentioned, we have already introduced the operational semantics of \(\lambda\)-calculus in Module 2 in an informal way. The goal of this module is to revisit \(\lambda\)-calculus with formal, small-step, operational semantics, demonstrate that it can be used to prove some properties, and show ways of extending it with added primitives.
1 Semantics and Category Theory
We will be describing the reduction steps in our programming languages by formally describing an arrow \((\rightarrow)\) operator, which maps a program state to the “next” program state. This is described within the context of category theory, in which our language is a category, and \(\rightarrow\) is a morphism over that category. You are not expected to have seen these terms before, so we will briefly introduce category theory here. This course will not look deeply into category theory, but, since programming language semantics are described as categories in category theory, knowing some of the language from category theory will help to contextualize formal semantics.
In CS courses, you have undoubtedly seen \textit{sets} and \textit{set theory}. \textit{Group theory} extends set theory by describing groups, which are sets that correspond to, and are described by, certain \textit{axioms} (defined for given groups), and generalizes the language of functions between and within groups. It is from group theory that words such as \textit{isomorphic} and \textit{homomorphic} arise, to describe certain properties of these functions. Category theory abstracts beyond this by describing categories which may not obviously be describable as groups or sets; in particular, one can describe entire mathematical calculi as categories. For instance, one can describe set theory itself as a category, with expressions in set theory as the \textit{objects} described, and the functions being equivalences (or reductions, or expansions, etc) between them; for instance, the resolution of the expression \{1, 2\} ∪ \{3, 4\} to \{1, 2, 3, 4\} is a function (probably a function that more generally describes the resolution of all expressions of the form \(X \cup Y\)). We call these functions between objects \textit{morphisms}.
Where category theory becomes particularly relevant is its abstraction over itself. In the language of category theory, we can describe an entire category—which, recall, can be a mathematical calculus, a language—as an object within the category of categories, and describe a morphism mapping that category to another category. For instance, it is possible to reversibly map the language of sets to the language of predicate logic. By doing so, whole bodies of mathematical literature and proofs can be mapped into other contexts, allowing for a sort of generalization of proofs that was not possible before category theory. Mappings between categories like this are called \textit{functors}, but they’re really just morphisms given a funny name because mathematicians aren’t as accustomed to this kind of abstraction as we are.
\begin{quote}
\textbf{Aside:} We introduced categories as “not obviously describable as groups or sets”. In fact, since category theory describes how categories can be mapped to other categories, and category theory is itself a category, it is perfectly possible to map any category to \textit{some} kind of set: for instance, we can describe the set of all valid program states. Hence “not obviously”, rather than “not”.
\end{quote}
We describe our own languages in terms of a morphism, which maps program states to program states. At this point we will describe program states in purely the same syntax as the language itself, but in future modules, we will add extra syntax for additional state; in either case, the syntax is our calculus. Morphisms are usually shown as arrows, often with text to specify exactly which morphism is being described; in fact, we’ve already seen a few morphisms, such as \(\beta\), but didn’t call them such at the time.
When describing morphisms, we are free to use other categories to do so. For instance, we could say that \((x + y \to z)\) if \((x + y = z)\), and we are now describing our language in terms of the language of arithmetic. It’s important to be clear, in such cases, what language is being described and what language is being used; for instance, in this case, it’s important to realize that the first + was part of the syntax of our language, and the second + was part of the syntax of arithmetic. Usually, this sort of mapping is too narrow to be called a functor—we haven’t actually described a complete rewriting of our language in terms of arithmetic, merely a step within our language—but in some cases, languages are actually described in terms of functors, by describing how to rewrite one language into another language. In fact, that’s a compiler, and proving things about compiler correctness involves proving the functor correct.
We won’t go any more deeply than this into category theory, because we’re not usually proving more broad things about categories. We’re only narrowly interested in proving things about our particular languages. But, you should now have some idea of the formal underpinnings we’re using to describe languages: programming language semantics are not an ad hoc invention, they are described in the language of categories.
2 Semantics and Reality
But is there anything to guarantee that the semantics we formally model are the same as the semantics implemented in real programming language implementations? The short answer is “usually not”.
There are systems that make formal semantics executable, but the resulting interpreters are usually unusably slow. The purpose of these systems is to have a ground truth for writing test cases. Even that is imperfect, however, since it’s always possible to write formal semantics which are consistent, but not what you intended.
There are also aspects of real implementations which are usually intentionally ignored in formal semantics. For instance, we won’t discuss what happens when the program state is too large to hold in memory. And, in later
modules, we won’t discuss garbage collection, even though it’s crucial to a correct implementation of many systems.
In a much later module on systems programming, we will discuss one counterexample, which successfully uses a formally-defined version of C both as a formal semantics and as a real compiler.
In reality, it’s impossible to prove, in the mathematical sense of the word, anything about how a program will behave on a real system. Aside from the pitfalls mentioned above, no formal system can model “a disgruntled employee took a pickaxe to my server”. Since we’re proving things about abstract calculi, rather than real implementations, we can actually prove things, with all the rigor of mathematics. But since we’re not proving things about a real implementation, it is the job of the designer of a formal semantics to argue that the semantics correctly reflects the design of the language, and/or of some implementation of the language.
3 Review: Post System
If you are already keen on theoretical computer science (or just still remember the Post system introduced in CS 245), congratulations, you may skip this section. Otherwise, please read along.
The Post system, named after Emil Post, is an example of a deductive formal system, which can be used to reason about programming languages. There are three components to a Post system: a set of signs (which forms the alphabet of the system), a set of variables, and a set of productions. A term is a string of signs and variables, and a production is an expression of the form:
\[
\frac{t_1 \quad t_2 \quad \cdots \quad t_n}{t}
\]
where \( t, t_1, \cdots, t_n \ (n \geq 0) \) are all terms. The \( t_i \) are called the premises of the production, and \( t \) is the conclusion. Thus, a production with the form:
\[
\frac{\text{premises}}{\text{conclusion}}
\]
is read as “if premises are true, then the conclusion holds”. A production without premises is permitted, and is called an axiom.
Productions are the definitions within our system, so it is outside the scope of the Post system to prove that the productions themselves are correct. In our case, the conclusions are what define how our programming languages are evaluated; in essence, each conclusion is a step we can take, and the premises are the context in which we can take those steps.
Post systems are used to prove conclusions, where a proof is constructed from proofs of its premises. Proofs based on Post systems are constructed using the following rules:
1. An instance of an axiom is a proof of its conclusion;
2. If \( P_1, P_2, \cdots, P_n \) are proofs of \( t_1, t_2, \cdots, t_n \) respectively, and
\[
\frac{t_1 \quad t_2 \quad \cdots \quad t_n}{t}
\]
is an instance of a production, then
\[
\frac{P_1 \quad P_2 \quad \cdots \quad P_n}{t}
\]
is a proof of \( t \).
Thus, given a final conclusion, a proof of that conclusion can be formed by proving its premises, until no unproven premises remain. The result of such a proof is an upside down tree with the root (final conclusion) at the bottom and the leaves (axioms) at the top.
**Example 1.** As an example of a Post system, we can encode the logical operations of ‘and’, $\land$, and ‘or’, $\lor$, using the following three rules:
\[
\frac{A}{A \lor B} \qquad \frac{B}{A \lor B} \qquad \frac{A \quad B}{A \land B}
\]
Using this small system, it is possible to show that the proof of $(A \lor B) \land (A \lor C)$ follows from a proof of $A$ alone:
\[
\frac{\dfrac{A}{A \lor B} \qquad \dfrac{A}{A \lor C}}{(A \lor B) \land (A \lor C)}
\]
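To make the flavour of Post-system proofs concrete, here is a small, hypothetical Python sketch (our own illustration, not part of the notes): each production becomes a function from already-proved premises to its conclusion, so a proof tree is just a nesting of function calls.

```python
# Formulas are nested tuples; 'A', 'B', 'C' stand for atomic propositions.

def or_left(a, b):
    """Production: from a proof of A, conclude A ∨ B."""
    return ('or', a, b)

def and_intro(p, q):
    """Production: from proofs of A and B, conclude A ∧ B."""
    return ('and', p, q)

# The proof tree above: A ⊢ A ∨ B and A ⊢ A ∨ C, then ∧-introduction.
proof = and_intro(or_left('A', 'B'), or_left('A', 'C'))
print(proof)  # ('and', ('or', 'A', 'B'), ('or', 'A', 'C'))
```

Because each function can only produce conclusions of its rule's shape, any value we can build this way corresponds to a valid derivation in the system.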
Post systems are used extensively for describing formal semantics. You will see that formal semantics of programming languages, including type systems, are often described in Post systems.
4 Operational Semantics for (Vanilla) $\lambda$-calculus
We have already discussed the semantics of $\lambda$-terms in Module 2, when we discussed free and bound variables, substitution, $\alpha$-conversion, and $\beta$-reduction. Assuming that we have already established the notion of binding, substitution, and $\alpha$-conversion, $\beta$-reduction seems to be a suitable candidate for operational semantics, for it specifies a procedure for carrying out computation.
Let’s rewrite $\beta$-reduction as a formal set of rules. First of all, all expressions that have a $\beta$-redex at the outermost level can be directly reduced, with no premises:
\[
(\lambda x. M) N \rightarrow_{\beta} M[N/x]
\]
This rule corresponds to the first part of the definition. However, the next part of the definition\(^1\), which describes the reduction of $\beta$-redices within an expression, cannot be simply interpreted using a single rule. We have to rely on the structure of the $\lambda$-expressions. Recall that $\lambda$-expressions are either abstractions, applications, or variables. A variable itself certainly doesn’t need any rules for $\beta$-reduction, but we can have reductions happening inside abstractions and applications. The above rule is the special case where the rator is an abstraction. We still need to handle the cases where reduction happens inside an abstraction, and within the rator or the rand of an application. The following rules capture those cases:
\[
\frac{M \rightarrow_{\beta} P}{\lambda x.\, M \rightarrow_{\beta} \lambda x.\, P}
\]
The most important fact about this rule is that in order to show that $\lambda x. M \rightarrow_{\beta} \lambda x. P$, we must either provide a proof of $M \rightarrow_{\beta} P$, or $M \rightarrow_{\beta} P$ must itself be an instance of an axiom.
---
\(^{1}\)Recall from Module 2: $C[(\lambda x. M) N] \rightarrow_{\beta} C[M[N/x]]$
For applications, remember that in the original description of \(\beta\)-reduction we didn’t specify a reduction strategy. That is, we can choose to start our reduction either in the rator or in the rand. For those two cases, we need separate rules:
\[
\frac{M \rightarrow_\beta P}{M N \rightarrow_\beta P N}
\]
\[
\frac{N \rightarrow_\beta P}{M N \rightarrow_\beta M P}
\]
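As a sanity check on these rules, here is a Python sketch of our own (not from the notes) that enumerates every term reachable in one $\beta$-step, one result per applicable rule. The substitution is naive: it assumes terms use distinct bound names so no variable capture occurs.

```python
# Terms: ('var', x) | ('lam', x, body) | ('app', rator, rand).

def subst(term, x, repl):
    """Naive M[repl/x]; assumes no variable capture (sketch only)."""
    tag = term[0]
    if tag == 'var':
        return repl if term[1] == x else term
    if tag == 'lam':
        if term[1] == x:
            return term          # x is rebound here; stop substituting
        return ('lam', term[1], subst(term[2], x, repl))
    return ('app', subst(term[1], x, repl), subst(term[2], x, repl))

def beta_steps(term):
    """All terms reachable in one β-step; empty list for normal forms."""
    out = []
    tag = term[0]
    if tag == 'app':
        f, a = term[1], term[2]
        if f[0] == 'lam':                                  # (λx.M) N → M[N/x]
            out.append(subst(f[2], f[1], a))
        out += [('app', f2, a) for f2 in beta_steps(f)]    # reduce the rator
        out += [('app', f, a2) for a2 in beta_steps(a)]    # reduce the rand
    elif tag == 'lam':                                     # reduce inside the λ
        out += [('lam', term[1], b) for b in beta_steps(term[2])]
    return out

print(beta_steps(('app', ('lam', 'x', ('var', 'x')), ('var', 'y'))))
# [('var', 'y')]
```

The list-valued result makes the non-determinism explicit: a term with several redices yields several successors, one per rule instance.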
We have just described how computation proceeds in \(\lambda\)-calculus. However, because a \(\lambda\)-calculus expression may match more than one of these conditions, our description is non-deterministic; we haven’t described a particular way of computing, but all valid ways of computing. In the previous module, we made this deterministic by focusing on the selection of redices, and we will now do the same formally.
5 Defining Evaluation Order
As we mentioned earlier, evaluation order is in fact very important, since many programming languages will not non-deterministically execute code. If we were to model actual programming languages using our calculus, it is crucial to choose a reduction strategy. In this section, we are going to discuss the operational semantics of \(\lambda\)-calculus under Normal Order Reduction and Applicative Order Evaluation. Let’s first consider NOR.
**Definition 1. (Small-Step Operational Semantics of the Untyped \(\lambda\)-Calculus, NOR)**
Let the metavariable \(M\) range over \(\lambda\)-expressions. Then a semantics of \(\lambda\)-terms in NOR is given by the following rules:
\[
(\lambda x.\, M)\, N \rightarrow M[N/x]
\]
\[
\frac{M \rightarrow M'}{\lambda x.\, M \rightarrow \lambda x.\, M'}
\]
\[
\frac{M_1 \rightarrow M_1' \quad \forall x. \forall M_3.\ M_1 \neq \lambda x.\, M_3}{M_1 M_2 \rightarrow M_1' M_2}
\]
\[
\frac{M_2 \rightarrow M_2' \quad \forall M_1'.\ M_1 \not\rightarrow M_1' \quad \forall x. \forall M_3.\ M_1 \neq \lambda x.\, M_3}{M_1 M_2 \rightarrow M_1 M_2'}
\]
The first and the second rules are the same as in \(\beta\)-reduction. Similar to non-deterministic \(\beta\)-reduction, we can reduce an expression that is either a redex, or an abstraction which contains a redex. However, in order to enforce NOR, we have to add additional restrictions to the third and fourth reduction rules. First of all, if \(M_1\) is an abstraction (that is, if \(M_1 M_2\) is itself a redex), we must reduce that redex directly rather than reduce inside it; this is the reason we introduced the premise \(\forall x. \forall M_3.\ M_1 \neq \lambda x.\, M_3\). Secondly, if \(M_1\) can be reduced further, we should reduce \(M_1\) before touching the rand; this is the reason the final rule carries the premise \(\forall M_1'.\ M_1 \not\rightarrow M_1'\).
Video 3.1 (https://student.cs.uwaterloo.ca/~cs442/W21/videos/3.1/): Formal semantics of NOR
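Read operationally, the NOR rules pick exactly one successor for any reducible term. The following Python sketch is our own illustration (naive substitution, assuming no variable capture), not code from the course:

```python
# Terms: ('var', x) | ('lam', x, body) | ('app', rator, rand).

def subst(term, x, repl):
    """Naive M[repl/x]; assumes no variable capture (sketch only)."""
    tag = term[0]
    if tag == 'var':
        return repl if term[1] == x else term
    if tag == 'lam':
        return term if term[1] == x else ('lam', term[1], subst(term[2], x, repl))
    return ('app', subst(term[1], x, repl), subst(term[2], x, repl))

def nor_step(term):
    """One NOR step, or None if no rule applies."""
    tag = term[0]
    if tag == 'app':
        f, a = term[1], term[2]
        if f[0] == 'lam':                 # the outermost redex wins
            return subst(f[2], f[1], a)
        s = nor_step(f)                   # otherwise leftmost: the rator
        if s is not None:
            return ('app', s, a)
        s = nor_step(a)                   # only then the rand
        return None if s is None else ('app', f, s)
    if tag == 'lam':                      # NOR does reduce under λ
        s = nor_step(term[2])
        return None if s is None else ('lam', term[1], s)
    return None
```

For example, with \(A = \lambda x.\lambda y.y\) and \(\Omega = (\lambda x.xx)(\lambda x.xx)\), two applications of `nor_step` take \(A\,\Omega\,z\) to \(z\), even though \(\Omega\) itself loops forever.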
Now let’s look at AOE:
**Definition 2. (Small-Step Operational Semantics of the Untyped \(\lambda\)-Calculus, AOE)**
Let the metavariable \(M\) range over \(\lambda\)-expressions. Then a semantics of the \(\lambda\)-terms in AOE is given by the following rules:
\[
\frac{\forall M_2'.\ M_2 \not\rightarrow M_2'}{(\lambda x.\, M_1)\, M_2 \rightarrow M_1[M_2/x]}
\]
\[
\frac{M_1 \rightarrow M_1'}{M_1 M_2 \rightarrow M_1' M_2}
\]
\[
\frac{M_2 \rightarrow M_2' \quad \forall M_1'.\ M_1 \not\rightarrow M_1'}{M_1 M_2 \rightarrow M_1 M_2'}
\]
For AOE, the first rule has the added condition that the rand (i.e. the argument) can be substituted only if it can’t be reduced further. Also, the abstraction rule is removed, since we cannot reduce within an abstraction; similarly, in most programming languages, you cannot evaluate inside a function you haven’t yet called. The last rule has the premise \(\forall x. \forall M_3.\ M_1 \neq \lambda x.\, M_3\) removed since, again, we want the argument to be fully reduced before it is substituted into the abstraction.
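A companion sketch for AOE, again our own hedged illustration with naive substitution: the rator is reduced first, then the rand, and substitution happens only once both are final. Note there is no case for reducing under a λ.

```python
# Terms: ('var', x) | ('lam', x, body) | ('app', rator, rand).

def subst(term, x, repl):
    """Naive M[repl/x]; assumes no variable capture (sketch only)."""
    tag = term[0]
    if tag == 'var':
        return repl if term[1] == x else term
    if tag == 'lam':
        return term if term[1] == x else ('lam', term[1], subst(term[2], x, repl))
    return ('app', subst(term[1], x, repl), subst(term[2], x, repl))

def aoe_step(term):
    """One AOE step, or None; never reduces under a λ."""
    if term[0] != 'app':
        return None                       # variables and abstractions are final
    f, a = term[1], term[2]
    s = aoe_step(f)                       # rator first
    if s is not None:
        return ('app', s, a)
    s = aoe_step(a)                       # then the rand, to a final form
    if s is not None:
        return ('app', f, s)
    if f[0] == 'lam':                     # substitute only once both are final
        return subst(f[2], f[1], a)
    return None
```

On \((\lambda x.x)((\lambda y.y)z)\), the first step reduces the argument, as AOE requires, whereas `nor_step` would substitute the unevaluated argument immediately.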
6 Terminal Values
In the previous section, we used the language of predicate logic (specifically, for-all) to conditionalize productions. While this is mathematically valid, it complicates the description of the language, and makes it more difficult to prove that a particular production is the right one to use: to show that we can use the first rule, we need to demonstrate that the rand cannot be used with any of the rules. Generally speaking, the rules and conditions become much clearer if we can instead syntactically define what expressions are terminal, or final; i.e., not capable of being reduced further.
There are a few choices that we could make for possible sets of terminal values in \(\lambda\)-calculus: we could choose \(\beta\)-normal form, weak normal form, or even head normal form (where only the leftmost expression is required to be in normal form). If we use anything other than \(\beta\)-normal form, we lose the guarantee given by the Church-Rosser Theorem. Even if we use \(\beta\)-normal form as the set of terminal values, we still need to be able to answer some important questions. For example, what is the semantics of \((\lambda x. xx)(\lambda x. xx)\)? The only response we can give is “no semantics”, since it does not have a normal form. And what about \((\lambda x. \lambda y. y)((\lambda x. xx)(\lambda x. xx))\)? It has a terminal value, but not all possible legal derivations will lead to it. Should this expression be given a final value of \(\lambda y. y\), since there is a possible reduction to it, or should we say that it has no meaning? In fact, the answer depends on what one needs to achieve by designing the semantics.
For now, we focus on the steps themselves rather than the possible terminal values. As a result, we can just let our final values be “the set of values our operational semantics would produce”. In the next module, we will discuss terminal values in greater detail.
7 Showcase: A Simple Proof
In this section, we will show that \((\lambda x. \lambda y. y)((\lambda x. xx)(\lambda x. xx))x\) indeed terminates and evaluates to \(x\) under NOR. This is not a formal proof by any means; however, we will use this example to give you an idea of how programming language theorists work with semantics.
The formal way to specify that the former expression terminates and evaluates to the latter is going to look like this:
\[(\lambda x.\lambda y.y)((\lambda x.xx)(\lambda x.xx))x \rightarrow (\lambda y.y)x \rightarrow x\]
This example is quite short. However, what should we do if we are dealing with larger examples? The answer is that we need a \(\rightarrow^*\) operator. It might be useful to formally define the \(\rightarrow^*\) operator so we can show every single step at once, instead of splitting them into separate proofs:
**Definition 3. (Sequencing)** Let the metavariables \(M\) range over \(\lambda\)-expressions and \(\rightarrow\) be the operator of “one step” in any small-step operational semantics. Then \(\rightarrow^*\) is defined as so:
\[
M \rightarrow^* M
\qquad
\frac{M_1 \rightarrow^* M_2 \quad M_2 \rightarrow M_3}{M_1 \rightarrow^* M_3}
\]
**Aside:** \(\rightarrow^*\) is the reflexive and transitive closure of \(\rightarrow\).
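Operationally, \(\rightarrow^*\) is just “apply \(\rightarrow\) until no rule applies”. A generic Python sketch (ours, not from the notes) that works for any one-step function returning `None` on terminal terms:

```python
def star(step, term, limit=10_000):
    """Iterate a one-step relation to a terminal term (sketch).

    `step` returns the next term, or None when no rule applies; the
    limit guards against terms with no terminal form, like Ω.
    """
    for _ in range(limit):
        nxt = step(term)
        if nxt is None:
            return term
        term = nxt
    raise RuntimeError('no terminal term reached within the step limit')

# Toy demonstration with a "decrement to zero" step relation:
dec = lambda n: n - 1 if n > 0 else None
print(star(dec, 5))  # 0
```

Plugging in a NOR or AOE step function would give a (slow, but faithful) evaluator; the step limit is an implementation concession with no counterpart in the mathematical definition.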
To keep the proof text short, we will make the following definitions:
\[A = (\lambda x.\lambda y.y), B = (\lambda x.xx)(\lambda x.xx)\]
Now we can actually start our “proof”.
\[
\frac{\dfrac{ABx \rightarrow^* ABx \qquad ABx \rightarrow (\lambda y.y)x}{ABx \rightarrow^* (\lambda y.y)x} \qquad (\lambda y.y)x \rightarrow x}{ABx \rightarrow^* x}
\]
That is, \((\lambda x.\lambda y.y)((\lambda x.xx)(\lambda x.xx))x \rightarrow^* x\).
Although this proof is of course trivial, with proper abstraction, we can prove similar properties of entire classes of programs. We will look at some of those properties in the next module.
**Aside:** There are also many, many other kinds of semantics. In this aside, we showcase two of them since they are also used in the field of programming languages. One of the variations of operational semantics is big-step operational semantics, which describes the terminal values every expression will evaluate to directly, rather than as the closure of smaller steps. For example, this is the big-step operational semantics for the \(\lambda\)-calculus under AOE:
\[
V \downarrow V
\]
\[
\frac{M[V_1/x] \downarrow V_2}{(\lambda x.\, M)\, V_1 \downarrow V_2}
\]
\[
\frac{M_1 \downarrow V_1 \quad V_1 M_2 \downarrow V_2}{M_1 M_2 \downarrow V_2}
\]
\[
\frac{M_2 \downarrow V_2 \quad V_1 V_2 \downarrow V_3}{V_1 M_2 \downarrow V_3}
\]
In this example, \(V\) is the metavariable over values.
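Read as an algorithm, big-step rules collapse into a recursive evaluator: evaluate the rator, then the rand, then the substituted body. The sketch below is our own hedged illustration (naive substitution; variables are treated as values so open terms simply return themselves, which a stricter semantics would reject):

```python
# Terms: ('var', x) | ('lam', x, body) | ('app', rator, rand).

def subst(term, x, repl):
    """Naive M[repl/x]; assumes no variable capture (sketch only)."""
    tag = term[0]
    if tag == 'var':
        return repl if term[1] == x else term
    if tag == 'lam':
        return term if term[1] == x else ('lam', term[1], subst(term[2], x, repl))
    return ('app', subst(term[1], x, repl), subst(term[2], x, repl))

def eval_aoe(term):
    """Big-step AOE sketch: compute V such that M ⇓ V."""
    if term[0] in ('var', 'lam'):          # V ⇓ V
        return term
    v1 = eval_aoe(term[1])                 # evaluate the rator...
    v2 = eval_aoe(term[2])                 # ...then the rand
    if v1[0] == 'lam':                     # (λx.M) V₁ ⇓ V₂ via M[V₁/x] ⇓ V₂
        return eval_aoe(subst(v1[2], v1[1], v2))
    return ('app', v1, v2)                 # a stuck application is left as-is
```

Notice that, unlike the small-step version, no intermediate program states are observable: the recursion goes straight from an expression to its value.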
Another kind of semantics is denotational semantics. Denotational semantics are used to show the correspondence from language constructs to familiar mathematical objects. Our definition of functional language
8 Adding Primitives
Around the end of Module 2, we discussed the \(\lambda\)-calculus implementations of commonly seen data types. While those discussions are very useful in showcasing the power of \(\lambda\)-calculus in representing computation, the implementations presented are not particularly practical; furthermore, it is much more efficient to make use of the computer architecture we have and implement those computations in their terms. For instance, since all computer architectures support integers (of some limited range) natively, it would be absurd to implement integers as Church numerals in a real language. As a result, in the practice of modelling real programming languages, we tend to model those as primitives. To be specific, those data types will be treated as intrinsic (i.e. built-in) values of our language. In this section, we will describe the semantic rules required if we were to add those built-in entities, since most of the semantics we see in future modules will have such intrinsics.
8.1 Booleans and Conditionals
We will first introduce the syntactic elements, in Backus-Naur Form (BNF). Note that we are adding new kinds of expressions to the definition of \(\langle Expr\rangle\); we will use “…” to denote the part of the definition of expressions that was given in Module 2.
\[
\langle \text{Boolexp} \rangle ::= \text{true} \mid \text{false} \\
\mid \text{not} \langle \text{Boolexp} \rangle \\
\mid \text{and} \langle \text{Boolexp} \rangle \langle \text{Boolexp} \rangle \\
\mid \text{or} \langle \text{Boolexp} \rangle \langle \text{Boolexp} \rangle \\
\langle \text{Expr} \rangle ::= \ldots \\
\mid \langle \text{Boolexp} \rangle \\
\mid \text{if} \langle \text{Boolexp} \rangle \text{then} \langle \text{Expr} \rangle \text{else} \langle \text{Expr} \rangle
\]
Errata: The above definitions of “not”, “and”, “or”, and “if” demand that the subexpressions be boolean expressions. The following definition of number binops and lists have a similar problem, restricting part of the expression to only expressions of a particular type. We got a bit ahead of ourselves, thinking about types in module 4; in all of these cases, any expression is allowed.
These syntactic elements are very similar to the ones you have seen from Module 2. However, they are now actually part of the syntax; that is, there is no \(\lambda\)-calculus representation for them. Programs in the \(\lambda\)-calculus with boolean primitives are simply \(\lambda\)-calculus expressions with additional syntax for boolean expressions, like so:
\[
\lambda x. \lambda y. \text{if } x \text{ then } y \text{ else false}
\]
We will now describe the operational semantics for this new language. Let the metavariables \(B\) and \(E\) range over all boolean expressions and all \(\lambda\)-expressions, respectively. We will start with “not”:
\[
\text{not true} \rightarrow \text{false}
\]
\[
\text{not false} \rightarrow \text{true}
\]
\[
\frac{B \rightarrow B'}{\text{not } B \rightarrow \text{not } B'}
\]
For “and” and “or”, we want the computation of the first parameter to happen first. In addition, we would like short-circuiting behavior for them; i.e., the evaluation of the second operand should not proceed if the result is already known from the first.
\[
\text{and false } B \rightarrow \text{false}
\]
\[
\text{and true } B \rightarrow B
\]
\[
\frac{B_1 \rightarrow B_1'}{\text{and } B_1\ B_2 \rightarrow \text{and } B_1'\ B_2}
\]
The last two rules are how we describe “the first argument must be fully evaluated before the second one”. The first rule describes the short-circuiting behavior: whether the second argument is evaluated or not, as long as the first argument evaluates to “false”, the whole “and” evaluates to false.
Exercise 1. Write the semantic rules for “or”.
Now we add the rules for “if” expressions, including a rule that reduces the condition until it becomes a boolean value:
\[
\text{if true then } E_1 \text{ else } E_2 \rightarrow E_1
\]
\[
\text{if false then } E_1 \text{ else } E_2 \rightarrow E_2
\]
\[
\frac{B \rightarrow B'}{\text{if } B \text{ then } E_1 \text{ else } E_2 \rightarrow \text{if } B' \text{ then } E_1 \text{ else } E_2}
\]
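Taken together, the boolean and conditional rules describe a deterministic evaluator. Here is a Python sketch of our own (booleans and conditionals only; the λ-calculus part of the language is omitted for brevity):

```python
# Expressions: ('true',), ('false',), ('not', b), ('and', b1, b2),
# ('if', b, e1, e2).

def bool_step(e):
    """One small step for the boolean primitives, or None (sketch)."""
    tag = e[0]
    if tag == 'not':
        b = e[1]
        if b == ('true',):  return ('false',)
        if b == ('false',): return ('true',)
        s = bool_step(b)                         # reduce inside "not"
        return None if s is None else ('not', s)
    if tag == 'and':
        b1, b2 = e[1], e[2]
        if b1 == ('false',): return ('false',)   # short-circuit
        if b1 == ('true',):  return b2
        s = bool_step(b1)                        # first argument first
        return None if s is None else ('and', s, b2)
    if tag == 'if':
        b, e1, e2 = e[1], e[2], e[3]
        if b == ('true',):  return e1
        if b == ('false',): return e2
        s = bool_step(b)                         # reduce the condition
        return None if s is None else ('if', s, e1, e2)
    return None                                  # true/false are terminal
```

Stepping `and (not false) false` first reduces the left operand to `true`, then hands control to the right operand, exactly the order the rules prescribe.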
8.2 Numbers
Note that we will restrict our definition to natural numbers. Also, we are working in an “imaginary machine”, so we don’t care about overflows (i.e., we assume that we can represent numbers of an infinite range). We will make the following definitions in our syntax, again in BNF:
\[
\begin{align*}
\langle \text{Num} \rangle & ::= 0 \mid 1 \mid \cdots \mid 9 \\
& \mid \langle \text{Num} \rangle \langle \text{Num} \rangle \\
\langle \text{NumBinOps} \rangle & ::= + \mid - \mid \ast \mid / \\
\langle \text{Expr} \rangle & ::= \cdots \\
& \mid \langle \text{Num} \rangle \\
& \mid (\langle \text{NumBinOps} \rangle\ \langle \text{Expr} \rangle\ \langle \text{Expr} \rangle)
\end{align*}
\]
We will now consider the semantics of binary operations. Let \( M, N \) range over numeric expressions, and \( a, b, c \) range over natural numbers. Starting with addition:
\[
\frac{a + b = c}{(+\ a\ b) \rightarrow c}
\]
\[
\frac{M \rightarrow M'}{(+\ M\ N) \rightarrow (+\ M'\ N)}
\]
\[
\frac{M \rightarrow M'}{(+\ a\ M) \rightarrow (+\ a\ M')}
\]
Note that this set of rules forces the first argument (i.e. left hand side) to be evaluated before the second argument is evaluated. Also note that we’re describing our language, the \( \lambda \)-calculus with numbers, in terms of the language of arithmetic, with the predicate \( a + b = c \).
Let’s now look at subtraction.
\[
\frac{a - b = c \quad c \in \mathbb{N}}{(-\ a\ b) \rightarrow c}
\]
\[
\frac{M \rightarrow M'}{(-\ M\ N) \rightarrow (-\ M'\ N)}
\]
\[
\frac{M \rightarrow M'}{(-\ a\ M) \rightarrow (-\ a\ M')}
\]
\]
The semantics for subtraction is almost the same as addition, but there is one difference: to actually compute \( a - b \), we need to make sure that \( a - b \) is a natural number. With these rules, there is no rule to match expressions like \((-\ 2\ 3)\), and so such expressions cannot be reduced. We describe this phenomenon as “getting stuck”, and in the next module, we will dive into this issue and discuss the significance of an expression getting stuck. Another way of handling this is to actually allow such subtraction, but define the result as something arbitrary and perhaps counter-intuitive, such as 0:
$$\frac{a - b = c \quad c \notin \mathbb{N}}{(-\ a\ b) \rightarrow 0}$$
Although this definition is unintuitive, it is not incorrect: we are defining our language’s $-$, and if it doesn’t match perfectly with the $-$ of arithmetic, that is part of the definition. Indeed, in real programming languages with integers of a limited size, no mathematical operations match their arithmetic definitions perfectly, because of overflow, but these languages are still valid and well-defined.
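The rules for \(+\) and \(-\) translate directly into a one-step function. In this sketch of ours, Python ints play the role of numerals, and a hypothetical `minus_total` flag switches between the “stuck” semantics and the “define it as 0” semantics:

```python
def num_step(e, minus_total=False):
    """One step for ('+', e1, e2) and ('-', e1, e2); ints are values (sketch)."""
    op, a, b = e
    if isinstance(a, int) and isinstance(b, int):
        if op == '+':
            return a + b                     # delegate to arithmetic: a + b = c
        c = a - b
        if c >= 0:
            return c                         # c ∈ ℕ: the normal subtraction rule
        return 0 if minus_total else None    # c ∉ ℕ: total variant, or stuck
    if isinstance(a, tuple):                 # left argument evaluates first
        s = num_step(a, minus_total)
        return None if s is None else (op, s, b)
    if isinstance(b, tuple):                 # then the right argument
        s = num_step(b, minus_total)
        return None if s is None else (op, a, s)
    return None
```

Returning `None` models a stuck term: under the default semantics `('-', 2, 3)` has no successor, while the total variant steps it to 0.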
**Exercise 2.** Write the semantic rules for $\ast$ and $/$ (use integer division; think about how you handle division by zero).
**Exercise 3.** Propose changes to the syntax rules and add new semantic rules, so we have pred and succ, which are unary functions for getting predecessor and successor of a number, in our language. Note: pred $0 = 0$.
8.3 Lists
In this section we will discuss lists. We will use the representation you should be well familiar with: a list containing $1, 2, 3$ will be
$$(\text{cons } 1\ (\text{cons } 2\ (\text{cons } 3\ \text{empty}))) = [1, 2, 3]$$
We will use a mathematical short-hand to make our semantic rules compact: \(L_1 + L_2\) will be the operator that appends \(L_1\) to the front of \(L_2\). For example: \([1] + [2] = [1, 2]\). We will also assume that \(+\) works for “empty”, the empty list.
Again, we will list the syntactic elements of lists here:
$$\langle \text{ListExpr} \rangle ::= \text{empty} \mid (\text{cons } \langle \text{Expr} \rangle\ \langle \text{ListExpr} \rangle) \mid [\langle \text{Expr} \rangle \langle \text{ListRest} \rangle]$$
$$\langle \text{ListRest} \rangle ::= \varepsilon \mid , \langle \text{Expr} \rangle \langle \text{ListRest} \rangle$$
$$\langle \text{Expr} \rangle ::= \ldots$$
$$\mid \langle \text{ListExpr} \rangle$$
$$\mid \text{first } \langle \text{ListExpr} \rangle \mid \text{rest } \langle \text{ListExpr} \rangle$$
The recursive definition of lists is essentially identical to the recursive data definition of the Racket list you saw in first-year courses.
Here are the semantic rules. Let the metavariables $L, E$ range over list expressions and $\lambda$-expressions respectively:
$$\frac{\forall E_1.\ E \not\rightarrow E_1 \quad L_2 = [E] + L_1}{(\text{cons } E\ L_1) \rightarrow L_2}$$
$$\frac{L_1 = [E] + L_2}{(\text{first } L_1) \rightarrow E}$$
$$\frac{L_1 = [E] + L_2}{(\text{rest } L_1) \rightarrow L_2}$$
Note that premises of the form $L_1 = [E] + L_2$ imply that $L_1$ is not empty.
At last, don’t forget that we want expressions to reduce inside those built-in functions:
\[
\frac{E_1 \to E'_1}{(\text{cons } E_1\ E_2) \to (\text{cons } E'_1\ E_2)}
\qquad
\frac{\forall E_3.\ E_1 \not\to E_3 \quad E_2 \to E'_2}{(\text{cons } E_1\ E_2) \to (\text{cons } E_1\ E'_2)}
\]
\[
\frac{E_1 \to E'_1}{(\text{first } E_1) \to (\text{first } E'_1)}
\qquad
\frac{E_1 \to E'_1}{(\text{rest } E_1) \to (\text{rest } E'_1)}
\]
Note that our definition of lists has been slightly less formal than our previous definitions, as we relied on an informally described mathematical language of lists for our predicates. It is not uncommon for formal semantics to have some quasi-formal “holes” like this, though obviously it is preferable to define everything as precisely as possible.
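To make these rules concrete, here is a small-step reducer sketch in Python. This is our own illustration, not part of the notes: redexes are encoded as tuples such as `("cons", e, l)`, and fully reduced lists are plain Python lists.

```python
# One-step reduction for the list fragment, following the rules above.
# Encoding (hypothetical, for illustration): integers and Python lists of
# values are fully reduced; ("cons", e, l), ("first", l), ("rest", l) are redexes.

def is_value(e):
    """A value is an integer or a fully reduced list of values."""
    if isinstance(e, int):
        return True
    if isinstance(e, list):
        return all(is_value(x) for x in e)
    return False

def step(e):
    """Perform one reduction step, or return None if e is a value or stuck."""
    if is_value(e):
        return None
    op = e[0]
    if op == "cons":
        _, hd, tl = e
        if not is_value(hd):                 # congruence: reduce the head first
            hd2 = step(hd)
            return None if hd2 is None else ("cons", hd2, tl)
        if not is_value(tl):                 # then the tail
            tl2 = step(tl)
            return None if tl2 is None else ("cons", hd, tl2)
        return [hd] + tl                     # (cons E L1) -> [E] + L1
    if op in ("first", "rest"):
        _, l = e
        if not is_value(l):                  # congruence on the argument
            l2 = step(l)
            return None if l2 is None else (op, l2)
        if l == []:                          # first/rest of empty: stuck
            return None
        return l[0] if op == "first" else l[1:]
    return None

def evaluate(e):
    """Iterate step until no rule applies (normal form)."""
    while True:
        e2 = step(e)
        if e2 is None:
            return e
        e = e2
```

Note that `step(("first", []))` returns `None`: the premise $L_1 = [E] + L_2$ fails on the empty list, so the expression is stuck, exactly as in the rules.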
### 8.4 Sets
A set is a mathematical collection of distinct objects. In real programming languages, sets are usually implemented with hash tables. However, when formulating a semantics for sets, we do not need to worry about the actual implementation; we can simply treat a set as a mathematical object. So long as an implementation provides the same observable behavior, it is correct.
The syntax of sets will be as follows:
\[
\langle \text{SetExpr} \rangle ::= \text{empty} \mid \{\langle \text{Expr} \rangle \langle \text{SetRest} \rangle\} \\
\mid \text{insert } \langle \text{Expr} \rangle\ \langle \text{SetExpr} \rangle \\
\mid \text{remove } \langle \text{Expr} \rangle\ \langle \text{SetExpr} \rangle \\
\langle \text{SetRest} \rangle ::= \varepsilon \mid , \langle \text{Expr} \rangle \langle \text{SetRest} \rangle \\
\langle \text{Expr} \rangle ::= \cdots \\
\mid \langle \text{SetExpr} \rangle \\
\langle \text{Boolean} \rangle ::= \cdots \\
\mid \text{contains? } \langle \text{Expr} \rangle\ \langle \text{SetExpr} \rangle
\]
Let the metavariables \( S \) and \( E \) range over set expressions and \( \lambda \)-expressions respectively:
\[
\frac{\forall E_1.\ E \not\to E_1}{(\text{insert } E\ \text{empty}) \to \{E\}}
\qquad
\frac{\forall E_1.\ E \not\to E_1}{(\text{remove } E\ \text{empty}) \to \text{empty}}
\]
\[
\frac{\forall E_1.\ E \not\to E_1 \quad S' = S \cup \{E\}}{(\text{insert } E\ S) \to S'}
\qquad
\frac{\forall E_1.\ E \not\to E_1 \quad S' = S \setminus \{E\}}{(\text{remove } E\ S) \to S'}
\]
\[
\frac{\forall E_1.\ E \not\to E_1 \quad E \in S}{(\text{contains? } E\ S) \to \text{true}}
\qquad
\frac{\forall E_1.\ E \not\to E_1 \quad E \not\in S}{(\text{contains? } E\ S) \to \text{false}}
\]
**Exercise 4.** Write the semantic rules for set operations where at least one argument is not fully reduced.
Note that, once again, we have described our sets in terms of the language of set theory.
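The rules above can be mirrored directly by treating a reduced set as a mathematical object. The following Python sketch is our own illustration (assuming all arguments are already fully reduced; the congruence cases of Exercise 4 are deliberately omitted), with reduced sets represented as frozensets:

```python
# One reduction step for insert / remove / contains? on fully reduced arguments.
# Redexes are tuples like ("insert", v, s); reduced sets are frozensets.

def set_step(e):
    """Apply the matching set rule to a redex e and return the result."""
    op = e[0]
    if op == "insert":
        _, v, s = e
        return s | {v}          # S' = S ∪ {E}
    if op == "remove":
        _, v, s = e
        return s - {v}          # S' = S \ {E}
    if op == "contains?":
        _, v, s = e
        return v in s           # reduces to true or false
    raise ValueError("not a set redex")
```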
## 9 Fin
In the next module, we will introduce types, which allow us to prove certain properties of languages, including that the semantics do not “get stuck”, by categorizing the kinds of values that may undergo certain operations.
Rights
Copyright © 2020, 2021 Yangtian Zi, Gregor Richards, Brad Lushman, and Anthony Cox.
This module is intended for CS442 at University of Waterloo.
Any other use requires permission from the above named copyright holder(s).
Availability: this version is available at 11583/2695146 since 2018-05-03T13:24:50Z.
Publisher: Elsevier. DOI: 10.1016/j.jpdc.2017.12.003.
Terms of use: openAccess. This article is made available under terms and conditions as specified in the corresponding bibliographic description in the repository.
An Efficient Data Exchange Mechanism for Chained Network Functions
Ivano Cerrato*, Guido Marchetto*, Fulvio Risso, Riccardo Sisto, Matteo Virgilio, Roberto Bonafiglia
*Department of Control and Computer Engineering
Politecnico di Torino
Torino, Italy, 10129
Abstract
Thanks to the increasing success of virtualization technologies and processing capabilities of computing devices, the deployment of virtual network functions is evolving towards a unified approach aiming at concentrating a huge amount of such functions within a limited number of commodity servers. To keep pace with this trend, a key issue to address is the definition of a secure and efficient way to move data between the different virtualized environments hosting the functions and a centralized component that builds the function chains within a single server. This paper proposes an efficient algorithm that realizes this vision and that, by exploiting the peculiarities of this application domain, is more efficient than classical solutions. The algorithm that manages the data exchanges is validated by performing a formal verification of its main safety and security properties, and an extensive functional and performance evaluation is presented.
Keywords: parallel algorithms, high speed packet processing, data exchange mechanism, network function virtualization
1. Introduction
New paradigms have recently emerged that aim at transforming the network into a more flexible and programmable platform. In particular, Network Function Virtualization (NFV) [1] proposes to replace dedicated middleboxes, used to deliver a multitude of network services by means of a growing number of (cascading) dedicated appliances, with software images that run on general-purpose servers. This allows leveraging high-volume standard machines (e.g., Intel-based blades) and computing virtualization to consolidate and optimize the processing in the data plane of the network. This results in a more flexible deployment
Figure 1: Function chains deployed in a middlebox (the figure annotates the journey of a specific packet within the middlebox).
of network applications (e.g., NAT, firewall), in lower capital and operating costs for the hardware thanks to the possibility to deploy many different (small) Virtual Network Functions (VNF) on the same (standard) computer, and in simpler and more reliable networks. In addition, while appliances are often dedicated to a single tenant, servers can be multitenant, hence being able to host services belonging to different actors [2] (e.g., a traffic monitor belonging to the operator and a firewall belonging to a given company). This brings even more advantages in terms of consolidation.
When several VNFs are executed in the same server, an incoming packet can traverse an arbitrary number of VNFs before leaving the middlebox (i.e., a function chain, as shown in Figure 1). The exact sequence of functions traversed by a packet can be determined only at run-time, by inspecting the packet. In fact, (i) packets belonging to different tenants can traverse different functions, and (ii) packets belonging to the same tenant can experience different paths (e.g., when only a portion of traffic needs to undergo a deep packet inspection). Packets can also be modified in transit (e.g., a NAT changes the source IP address), hence requiring that a packet is re-analyzed when it leaves a VNF, to determine what is next. As depicted in Figure 1, this requires that each server includes a module (usually referred as virtual switch or vSwitch) that classifies each packet to determine which is the next function to traverse, and then delivers the packet to it.
The several software (and virtualization) layers that packets may need to traverse within an NFV middlebox may cause a drop in the performance of virtualized function chains, especially when several functions have to process the packets. This results in a worse quality of experience for the end users, whose traffic is processed by VNFs before reaching its final destination. Building on some preliminary work [3], this paper therefore proposes and evaluates an efficient algorithm for moving network packets between VNFs consolidated on the same server and the vSwitch. In particular, this work exploits circular lock-free First-In-First-Out (FIFO) buffers managed by ad-hoc algorithms.
Existing solutions adopted to move packets between VNFs and the vSwitch are usually based on the producer-consumer paradigm. However, since in NFV it is likely that a packet goes from the vSwitch to the VNF and then back to the vSwitch, those approaches require the VNF to copy (almost) each packet from a first receiving queue into a second queue used for sending it back. Instead, our proposal has the peculiarity of allowing VNFs to return back packets without any (expensive) packet copy, with consequent performance improvements.
In particular, the proposed mechanism is designed to satisfy the following principles. First, guarantee traffic isolation between functions, so that a function can only access the portion of traffic that is expected to flow through it. This limits the potential hazards due to malicious applications and provides effective support for multitenancy. Second, provide excellent scalability by making it possible to consolidate a huge number of VNFs on the same server. Third, achieve high performance in terms of data movement speed among different VNFs, similarly to what is required in physical servers among different hardware modules [4]. Scalability and performance are also obtained by taking care of implementation details such as exploiting cache locality as much as possible and limiting the number of context switches. The correctness of the data exchange algorithm (e.g., the absence of concurrency hazards) is guaranteed by means of formal verification.
The work of this paper targets VNFs that (i) implement a pass-through behavior (each packet received is sent again to the vSwitch), (ii) may drop packets, and (iii) may generate new packets as a consequence of a packet just received (e.g., an ARP reply as a consequence of an ARP request). This allows us to cover the vast majority of network middleboxes, including for example NATs, firewalls, traffic monitors, and intrusion detection systems. Instead, applications that may need to generate new packets asynchronously, e.g., in order to open a connection with some remote service or to retransmit a TCP packet, are out of the scope of this paper and left as future work. Note that the class of applications not supported by our proposal also includes cloud-like services (e.g., web servers, databases), since they typically need to generate new packets asynchronously and do not implement a pass-through behavior with respect to network traffic (they are in fact the final destination of network packets). Moreover, the bottleneck of these applications typically lies in the user code rather than in the network layer, hence they are not the applications targeted in this paper.
The proposed algorithm is agnostic both to the data actually exchanged and to the actors that operate on such data. Then, although the use case that motivated our research is the efficient implementation of virtualized functions chains, the algorithm can be exploited in other contexts as well, provided that modules exchanging data implement a pass-through behavior and do not send new data asynchronously.
The rest of the paper is organized as follows. Section 2 explores related existing solutions able to exchange data among different software components. Given the nature of our solution, we are particularly interested in covering FIFO queue designs and producer/consumer paradigms, in order to emphasize the differences between our work and the existing mechanisms. Section 3 details the operating principles of the proposed algorithm and the way the communication is managed by the different modules. Section 4 presents a formal verification of the data exchange algorithm, which rigorously proves its correctness from a safety and security perspective. Section 5 proposes some implementation guidelines that can be used to further improve the performance of the data exchange. Section 6 presents a wide range of experiments that validate our algorithm both in ideal conditions and in real scenarios. Finally, Section 7 concludes the paper.
2. Related Work
The efficient lock-free implementation of FIFO queues has been largely investigated in the past. For instance, [5] and [6] propose lock-free algorithms that operate on FIFO queues managed as non-circular linked-lists. Similar proposals can be found in [7] and [8], which also require to manage a pool of pre-allocated memory slots. However, all the solutions proposed so far are usually based on uni-directional flows of data according to the producer-consumer paradigm, which is not an optimal solution for managing the bi-directional data flows occurring in the virtualized environments we are considering. In fact, in these environments, a packet typically goes from the virtual switch to the VNF and then back to the virtual switch. Using classical uni-directional producer-consumer solutions requires the VNF to remove data just received from a first queue and to write them into a second queue used for sending the data back. This implies that data are always copied once in this trip, which may limit the throughput of the system particularly when several functions have to be traversed (hence several copies have to be carried out).
Another possible way to efficiently exchange data between applications can be seen in the context of a lock-free operating system, where [9] and [10] present a single producer/consumer and a multi-producer/multi-consumer algorithm to manage circular FIFO queues. A similar idea has been proposed by Intel in the DPDK library [11] and in [12], whose algorithms have been designed to operate in contexts where many processes
can concurrently insert items into a shared buffer or remove them. However, those proposals are not applicable in our case because they cannot guarantee isolation between VNFs due to the presence of a unique shared buffer. Similar considerations can be made for ClickOS [13, 14] (based on the VALE virtual switch [15]) and NetVM [16], which instead targets network function chains. ClickOS uses two unidirectional queues with the necessity to copy packets once; NetVM uses two unidirectional queues between “untrusted” functions, while switching to a unique shared buffer (handled in zero-copy) among “trusted” functions, hence impairing traffic isolation requirement. MCRingBuffer [17] defines instead an algorithm to exchange data between one producer and one consumer running on different CPU cores, which is particularly interesting for its efficient implementation of memory access patterns. In fact, part of those techniques were reused in our implementation as well (Section 5).
Solutions such as Xen [18] and Hyper-Switch [19] address the problem of efficiently exchanging packets between different entities such as virtual machines (VM) running on the same server. However, their architecture is designed for packets that originate or terminate their journey in a VM, not for pass-through functions. This implies different architectural choices such as different buffers for packets flowing in different directions, albeit integrated with a complex data exchange mechanism based on swapping memory pages [18]. It is also worth mentioning that network-aware CPU management techniques have been proposed in the context of Xen for improving the performance of virtual servers hosting these network applications [20].
Virtual switches such as OpenvSwitch (OVS) [21] and the eXtensible Datapath daemon (xDPd) [22] are used to implement network function chains (as shown respectively in [23, 24]), although they appear in some way orthogonal to our proposal. In fact, they implement the classification and forwarding mechanism (either based on the traditional L2 forwarding or on the more powerful Openflow protocol [25]), but do not focus on the data exchange mechanism, which is often based on bi-directional FIFO queues (in some case a shared memory can be configured). In this respect, our mechanism can be built on top of those existing solutions to improve their data transfer capabilities, hence the overall performance of the system.
As a final remark, it is worth pointing out that this paper focuses on the problem of efficiently moving packets between different functions within a network middlebox, while it does not consider the problem of efficiently receiving/sending packets from/to the network. This aspect, orthogonal to our proposal, is instead considered in [26], [27] and [11].
3. The data exchange algorithm
As described in Section 1, the implementation of efficient virtual function chains within a single middlebox requires a fast and efficient mechanism to move data between the vSwitch and the VNFs. This translates into the need for a dedicated data dispatching algorithm, since this component is one of those with the biggest impact on system performance.
The algorithm we designed to satisfy this need is described in the remainder of this section. In particular, we first provide an overview of the algorithm and the intuitions behind our proposal, and then dive into the details of the algorithm itself.
In the context of our algorithm, we define the Master as the module that plays the role of the vSwitch, while VNFs are represented by modules called Workers. Moreover, a token is a generic data unit exchanged between the Master and the Workers. In fact, as stated in Section 1, although the token represents a packet in the NFV use case that motivated our research, the proposed algorithm can be actually used to exchange any kind of data, according to the specific use case implemented.
3.1. Algorithm overview
As shown in Figure 2, the proposed data exchange algorithm is based on a set of lock-free ring buffers. In particular, the Master shares two different buffers with each Worker, which are managed through different (but not independent) parts of the same exchange algorithm.
Figure 2: Deployment of the algorithm within a middlebox.
The intuition behind our proposal, which derives from the NFV use case, is the following. According to Figure 1, VNFs are pieces of software operating on the data plane of the network that mainly process pass-through packets. In fact, VNFs receive packets from the vSwitch and, in the vast majority of cases, forward them back to the vSwitch itself with minimal (or no) changes, allowing packets to continue their journey towards the final destination. To efficiently support pass-through data, we defined the primary buffer shown in Figure 2, which has the peculiarity of allowing tokens to be moved both from the Master to the Worker and then back from the Worker to the Master, without requiring any (expensive) copy of data in the Worker itself. Avoiding a copy of each packet in each traversed VNF saves many CPU cycles and consequently improves the performance of virtualized function chains. Notably, in addition to VNFs operating on pass-through data, the primary buffer also supports functions that need to drop packets (e.g., a firewall); dropped packets are not sent back to the vSwitch after their processing in the VNF.
Some network functions may need to send new packets as a consequence of a previously received packet. For example, a bridging module may need to duplicate a broadcast packet several times (e.g., once for each interface of the middlebox) and then provide all these copies to the next function in the chain. Similarly, another function may extend a packet (e.g., by adding a new header) so that it exceeds the MTU of the network; this packet must then be fragmented, and all the fragments must be sent out. Since network applications forward most of the traffic without generating new packets, we decided to keep the primary buffer as simple as possible for the sake of speed. We then defined a new second lock-free ring buffer, i.e., the auxiliary buffer of Figure 2, to support Workers that can possibly generate new tokens as a consequence of the data received from the Master. It is worth noting that this second buffer is unidirectional and it is only used by the Worker to provide “new” data to the Master.
Since VNFs may belong to different tenants, it is necessary to guarantee that a network function only accesses the proper portion of network traffic. We therefore use a different pair of buffers per Worker in order to guarantee traffic isolation among them, as this ensures that a Worker can only access packets that are expected to flow through the Worker itself.
Each buffer slot (both primary and auxiliary) includes some flags in addition to the real data, which are used to identify the content of the slot itself; more details will be presented in the next sections. Finally, buffer slots are currently of fixed length and equal to the network MTU size; however, the extension of the algorithm to handle variable slot sizes, tailored to the length of the packet actually received, is trivial.
3.2. Execution model
The Master operates in polling mode, i.e., it continuously checks for new tokens and inserts them into the primary buffer shared with the target Worker. This operating mode has been chosen because the middlebox (and hence the Master itself) is supposed to be traversed by a huge amount of traffic; a blocking model would be too penalizing because it would require an interrupt-like mechanism to start the Master whenever new data are available, which could significantly degrade performance at high packet rates [28]. In fact, interrupt handling is expensive in modern superscalar processors because they have long pipelines and support out-of-order and speculative execution [29], which increase the penalty paid by an interrupt.
Vice versa, since the traffic entering a specific Worker is potentially a small portion of the traffic handled by the Master, a blocking model looks more appropriate for this module. This ensures that CPU resources can be shared more effectively, which is important in multi-tenant systems where a potentially large number of Workers is active. Hence, when a Worker has no more data to be processed, it suspends itself until the Master wakes it up by means of a shared semaphore.
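The split between a polling Master and a blocking Worker can be sketched with a toy Python model. This is our own illustration under assumed names (`buf`, `wakeup`, the `None` sentinel), not the paper's implementation: the Master enqueues a whole batch and signals once, while the Worker blocks on a semaphore and drains everything available when woken.

```python
# Toy model: polling-style Master produces a batch; blocking Worker drains it.
import threading
import queue

buf = queue.Queue()                   # stands in for the shared buffer
wakeup = threading.Semaphore(0)       # the shared semaphore of the paper
processed = []                        # results produced by the Worker

def worker():
    while True:
        wakeup.acquire()              # suspend until the Master signals
        while not buf.empty():        # drain everything available (batching)
            item = buf.get()
            if item is None:          # sentinel (our addition): stop the model
                return
            processed.append(item * 2)

def master(items):
    for it in items:                  # "polling" side: produce and enqueue
        buf.put(it)
    buf.put(None)
    wakeup.release()                  # one wakeup for the whole batch

t = threading.Thread(target=worker)
t.start()
master([1, 2, 3])
t.join()
```

One `release()` suffices for the whole batch, which mirrors the paper's goal of reducing context switches: the Worker is not woken per token.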
3.3. Basic algorithm: handling pass-through data
The algorithm used to move data from the Master to the Workers (and back) requires the sharing of some variables (underlined in the pseudocode shown in the following), a semaphore, and the primary buffer between the Master and each Worker. In particular, in this section we assume the presence of the Master and a single Worker, while its extension to several Workers is trivial.
The primary buffer is operated through four indexes. \texttt{M.prodIndex} and \texttt{W.prodIndex} are shared between the Master and the Worker. The former index points to the next empty slot in the buffer, ready to be filled by the Master, while the latter points to the next slot in the buffer that the Worker will make available to the Master again after its processing. \texttt{M.prodIndex} is incremented by the Master when it enqueues new tokens, while \texttt{W.prodIndex} is incremented by the Worker when it makes processed tokens available to the Master again. \texttt{M.consIndex} is a private index of the Master and points to the next token that the Master itself will remove from the buffer. Finally, \texttt{W.consIndex} is a private index of the Worker and points to the next token to be processed by the Worker itself. In addition to these indexes, the algorithm exploits the shared variable \texttt{workerStatus}, which indicates whether the Worker is suspended or it is running.
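The roles of the four indexes can be illustrated with a single-threaded Python sketch. The index names follow the paper, but the driver functions and the full-buffer convention (one slot kept free) are our own illustration:

```python
# Single-threaded model of the primary buffer and its four indexes.
N = 8                       # buffer capacity
buffer = [None] * N
M_prod = 0                  # next empty slot the Master will fill (shared)
W_prod = 0                  # next processed slot returned to the Master (shared)
M_cons = 0                  # next slot the Master will consume (private to Master)
W_cons = 0                  # next slot the Worker will process (private to Worker)

def master_write(data):
    """Master enqueues a token; fails when the ring is full (one slot kept free)."""
    global M_prod
    if M_prod == (M_cons - 1) % N:
        return False
    buffer[M_prod] = data
    M_prod = (M_prod + 1) % N
    return True

def worker_process_all(fn):
    """Worker processes every pending token in place, then publishes the batch."""
    global W_cons, W_prod
    while W_cons != M_prod:
        buffer[W_cons] = fn(buffer[W_cons])
        W_cons = (W_cons + 1) % N
    W_prod = W_cons             # publish a whole batch at once

def master_read_all():
    """Master consumes every token the Worker has published."""
    global M_cons
    out = []
    while M_cons != W_prod:
        out.append(buffer[M_cons])
        M_cons = (M_cons + 1) % N
    return out
```

For example, after `master_write` of 1, 2, 3 and `worker_process_all(lambda x: x * 2)`, `master_read_all()` yields the processed tokens in FIFO order, with the data never having left the shared buffer.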
Algorithm 1 provides the overall behavior of the Master and shows how it cyclically repeats the following three main operations. First, in lines 14-21 it produces new data (line 19), which corresponds to the reception of packets from the network interface card (NIC) in the NFV use case, and immediately provides them to the Worker through the primary buffer (line 20). Second, it reads the tokens already processed by the Worker
from the primary buffer (line 22). Third, it wakes up the Worker if there are data waiting for service for a long time in order to avoid starvation (line 23). From lines 14-21, it is evident that the Master produces several tokens consecutively, in order to better exploit cache locality. Furthermore, if the buffer is full (line 15), it stops data production and starts removing the tokens already processed by the Worker from the buffer.
**Algorithm 1** Executing the Master
```
 1: Procedure master.do()
 2:
 3: {Initialize shared variables}
 4: M.prodIndex ← 0
 5: W.prodIndex ← 0
 6: workerStatus ← WAIT_FOR_SIGNAL
 7:
 8: {Initialize private variables of the Master}
 9: M.consIndex ← 0
10:
11:
12: {Execute the algorithm}
13: while true do
14:   for i = 0 to (i < N or timeout()) do
15:     if M.prodIndex == (M.consIndex−1) then
16:       {The buffer is full}
17:       break
18:     end if
19:     data ← master.produceData()
20:     master.writeDataIntoBuffer(data)
21:   end for
22:   master.readDataFromBuffer()
23:   master.checkForOldData()
24: end while
```
Algorithm 2 details the mechanism implemented in the Master to send data to the Worker. As shown by line 8, a token is inserted into the slot pointed to by the shared index \(M\text{.prodIndex}\) as soon as it is produced. However, the Worker is awakened only if at least a given number of tokens (i.e., MASTER_PKT_THRESHOLD) are waiting for service in the primary buffer (lines 10-13). Thanks to this threshold, we avoid waking up the Worker for each single token that needs to be processed, which results in a performance improvement because (i) it reduces the number of context switches and (ii) it increases cache locality, for both data and code. Since a token is inserted into the buffer as soon as it is produced (regardless of whether the Worker is running or not), and since the Worker will suspend itself only when the buffer is empty (as detailed in Algorithm 5), the Worker is able to process a huge amount of data consecutively, thus improving system performance.
Our algorithm avoids the starvation of tokens sent to a Worker (which may happen especially when
Algorithm 2 The Master writing data into the primary buffer
```
 1: Procedure master.writeDataIntoBuffer(Data d)
 2:
 3: if M.prodIndex == M.consIndex then
 4:   {The buffer is empty}
 5:   timeStamp ← now()
 6: end if
 7:
 8: buffer.write(M.prodIndex, d)
 9:
10: if buffer.size() > MASTER_PKT_THRESHOLD and (workerStatus ≠ SIGNALED) then
11:   workerStatus ← SIGNALED
12:   wakeUpWorker()
13: end if
```
the system is in underload conditions) thanks to a timeout event that wakes up the Worker even if the above-mentioned threshold has not been reached yet. In particular, the Master acquires and stores the current time whenever it inserts a new token and the buffer is empty (lines 3-6 of Algorithm 2). This way, the Master knows the age of the oldest token and is able to wake up the Worker depending on the value of a given time threshold, as shown in Algorithm 3.
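Taken together, the batch threshold and the anti-starvation timeout form a single wakeup predicate, sketched below. The constant values are arbitrary and the function is our own illustration; the names mirror, but are not taken from, the paper's pseudocode.

```python
# Combined wakeup decision: batch threshold (Algorithm 2) or timeout (Algorithm 3).
MASTER_PKT_THRESHOLD = 4      # wake only once a batch has accumulated...
TS_THRESHOLD = 0.01           # ...or once the oldest token has waited this long (s)

def should_wake_worker(pending, signaled, oldest_age=0.0):
    """Return True if the Master should wake the Worker now."""
    if signaled:                            # Worker already running or signaled
        return False
    if pending > MASTER_PKT_THRESHOLD:      # batch threshold reached
        return True
    return pending > 0 and oldest_age > TS_THRESHOLD   # anti-starvation timeout
```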
Algorithm 3 The Master waking up the Worker due to a timeout
```
1: Procedure master.checkForOldData()
2:
3: if buffer.size() > 0 and (workerStatus ≠ SIGNALED) and ((now() − timeStamp) > TS_THRESHOLD) then
4:   workerStatus ← SIGNALED
5:   wakeUpWorker()
6: end if
```
The functions described in Algorithm 2 and Algorithm 3 need to know whether the Worker is already running or not in order to avoid useless Worker awakenings. This information is carried by the shared variable workerStatus, which is set to SIGNALED by the Master just before waking up the Worker (line 11 of Algorithm 2 and line 4 of Algorithm 3), and changed to WAIT_FOR_SIGNAL by the Worker just before suspending itself (line 22 of Algorithm 5). This way, the Master can test this shared variable to have an indication about the Worker status, and then wake it up only when necessary.
Algorithm 4 shows how the Master removes from the primary buffer the data that have already been processed by the Worker. In particular, it consumes tokens until the index M.consIndex reaches the index W.prodIndex, which is incremented by the Worker each time it has handled a batch of tokens, as detailed in Algorithm 5. In this way, the Master also reads several consecutive data from the primary buffer in order to better exploit cache locality.
Algorithm 4 The Master reading data from the primary buffer
```
 1: Procedure master.readDataFromBuffer()
 2:
 3: if buffer.size() then
 4:   if M.consIndex ≠ W.prodIndex then
 5:     timeStamp ← now()
 6:     while M.consIndex ≠ W.prodIndex do
 7:       if not buffer.dropped(M.consIndex) then
 8:         master.consumeData(buffer.read(M.consIndex))
 9:       end if
10:       M.consIndex++
11:     end while
12:   end if
13: end if
```
Notice that Algorithm 4 also considers those tokens provided by the Master to the Worker, and dropped by the Worker itself. In case of dropped data, the Master receives back an empty slot, identified through the flag dropped. The content of a slot is only consumed if this flag is zero, otherwise the Master just increments the M.consIndex and moves on to the next slot of the buffer, as shown in lines 7-10. This prevents the Master from reading a slot with a meaningless content.
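The drop mechanism can be sketched as follows. The slot layout and function names are our own illustration, not the paper's exact structures: each slot carries a `dropped` flag beside its payload, and the Master skips flagged slots while recycling them for reuse.

```python
# Sketch of the Master's consume loop skipping dropped slots.
from dataclasses import dataclass

@dataclass
class Slot:
    data: object = None
    dropped: bool = False

def consume_processed(slots, consumed):
    """Walk processed slots; consume payloads, silently skipping dropped ones."""
    for s in slots:
        if not s.dropped:
            consumed.append(s.data)
        s.data, s.dropped = None, False   # recycle the slot for the Master

out = []
consume_processed([Slot(10), Slot(99, dropped=True), Slot(30)], out)
```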
Algorithm 5 details the operations of the Worker. As evident from lines 12-23, whenever a Worker wakes up, it processes all the tokens available in the primary buffer (i.e., all the slots of the buffer with indexes less than M.prodIndex). Only at this point (line 24), as well as after it has processed a given amount of data (lines 13-16), does the Worker update the shared index W.prodIndex, so that the Master can consume all the tokens already processed by the Worker itself. This way, the Master is notified of data availability only when a given amount of tokens is ready to be consumed, with a positive impact on performance. It is worth noting that this batching mechanism is different from the one used when the Master sends data to the Worker. In that case, the Worker is woken up when the amount of data in the buffer exceeds a threshold, although M.prodIndex, used by the Worker to understand when it has to suspend itself, is incremented each time a new token is inserted. Here, instead, W.prodIndex (i.e., the index used by the Master to know when the consumption of tokens must stop) is not updated each time the Worker processes a token. As a consequence, it is possible that some tokens have already been processed by the Worker while W.prodIndex has not yet been updated, so the Master cannot consume them in the current execution of Algorithm 4. This results in a slightly higher latency for these tokens, but in better overall performance thanks to the batch processing enabled in the Master. As a final remark, lines 18-20 show that the Worker can drop the token under processing by setting the dropped flag in the current slot of the primary buffer.
Algorithm 5 Executing the Worker
1: Procedure worker.do()
2:
3: {Initialize private variables of the Worker}
4: W.consIndex ← 0
5: pkts.processed ← 0
6:
7: {Execute the algorithm}
8: while true do
9: waitForWakeUp()
10: W.consIndex ← W.prodIndex
11: pkts.processed ← 0
12: while W.consIndex ≠ M.prodIndex do
13: if pkts.processed == WORKER_PKT_THRESHOLD then
14: pkts.processed ← 0
15: W.prodIndex ← W.consIndex
16: end if
17: toBeDropped ← buffer.process(W.consIndex)
18: if toBeDropped then
19: buffer.setDropped(W.consIndex)
20: end if
21: W.consIndex++
22: pkts.processed++
23: end while
24: W.prodIndex ← W.consIndex
25: workerStatus ← WAIT_FOR_SIGNAL
26: end while
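The batched publication of W.prodIndex implemented by Algorithm 5 can be sketched in Python as follows (the instrumentation list records the values the shared index would take; the names and the threshold value are illustrative, not from the prototype):

```python
WORKER_PKT_THRESHOLD = 2  # deliberately small for the example

def worker_drain(buffer, w_cons, m_prod, process):
    """Sketch of Algorithm 5, lines 12-24: the shared W.prodIndex is
    published only every WORKER_PKT_THRESHOLD processed tokens, plus once
    more when the drain loop ends, so the Master sees processed tokens in
    batches rather than one by one."""
    n = len(buffer)
    w_prod_updates = []          # instrumentation: published index values
    pkts_processed = 0
    while w_cons != m_prod:
        if pkts_processed == WORKER_PKT_THRESHOLD:
            pkts_processed = 0
            w_prod_updates.append(w_cons)    # lines 13-16
        process(buffer[w_cons])
        w_cons = (w_cons + 1) % n
        pkts_processed += 1
    w_prod_updates.append(w_cons)            # line 24: final publication
    return w_prod_updates

buf = list(range(8))
updates = worker_drain(buf, 0, 5, lambda tok: None)
# five tokens are processed, but the index is published only three times
```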
Figure 3 depicts the status of the primary buffer\(^1\) and the indexes used by the algorithm at four different time instants. In Figure 3(a) the buffer is empty, so all the indexes point to the same position. In Figure 3(b), instead, the Master has already inserted some data into the buffer, but the Worker is still waiting because MASTER_PKT_THRESHOLD has not been reached yet. Figure 3(c) depicts the situation in which the Master has woken up the Worker, which has already processed two items. Notice that, since WORKER_PKT_THRESHOLD has not been reached yet, W.prodIndex still points to the oldest token in the buffer. In Figure 3(d), finally, this threshold has been passed and the Master has already consumed some data.
3.4. Extended algorithm: handling Worker-generated data
Our algorithm also supports Workers that may need to generate new data as a consequence of the token just received from the Master. However, this cannot be done with the primary buffer alone, as Workers cannot inject new data into it: a Worker can only modify (potentially completely) pass-through tokens in the primary buffer or, at most, drop them.
Therefore, when new data have to be provided to the Master, the Worker can use the auxiliary buffer. This buffer, in which the Worker acts as the producer while the Master plays the role of the consumer, is
\(^1\)For the sake of clarity, the figure represents the shared buffer as an array instead of a circular FIFO queue.
Figure 3: Run-time behavior and indexes of the algorithm.
managed through two indexes; moreover, it requires a further flag in each slot of the primary buffer, which indicates whether the next token should be read from the primary or the auxiliary buffer.
Algorithm 6 details how the Worker sends new data to the Master as a consequence of processing the token at position \( W.\text{consIndex} \) in the primary buffer. As shown in lines 3-11, several data items can be generated for a single token received from the Master, and all these items are then linked to the same slot of the primary buffer. A first flag, called aux, is set in the slot of the primary buffer to signal to the Master that the next slot to read is the one on top of the auxiliary buffer (line 13). The next flag set in a slot of the auxiliary buffer indicates, instead, that the next packet must still be read from the auxiliary buffer, instead of returning to the next slot of the primary buffer.
Algorithm 6 The Worker writing new data into the auxiliary buffer
1: Procedure worker.writeDataIntoAuxBuffer(Data[] newData, Index W.consIndex)
2:
3: while data ← newData.next() do
4: if auxProdIndex == (auxConsIndex − 1) then
5: {The auxiliary buffer is full}
6: break
7: end if
8: auxBuffer.write(auxProdIndex, data)
9: auxBuffer.setNext(auxProdIndex)
10: auxProdIndex++
11: end while
12: auxBuffer.resetNext(auxProdIndex − 1)
13: buffer.setAux(W.consIndex)
The reading procedure is described in Algorithm 7. When the Master encounters a slot with the aux flag set in the primary buffer, it processes a run of tokens in the auxiliary buffer, starting from the slot pointed by auxConsIndex up to (and including) the first slot whose next flag is reset. Moreover, according to lines 4-7 of Algorithm 6, if the auxiliary buffer is full, new tokens that the Worker may want to send to the Master are dropped.
Algorithm 7 The Master reading data from the auxiliary buffer
1: Procedure master.readDataFromAuxBuffer()
2:
3: while true do
4: master.consumeData(auxBuffer.read(auxConsIndex))
5: if not auxBuffer.next(auxConsIndex) then
6: auxConsIndex++
7: break
8: end if
9: auxConsIndex++
10: end while
Figure 4 depicts the primary buffer with some slots linked to the auxiliary buffer. In particular, the slot pointed by \( M.\text{consIndex} \) is associated with two data items of the auxiliary buffer, i.e., the one pointed by auxConsIndex and the following one. The next flag of this second slot is reset, in order to indicate that the subsequent slot of the auxiliary buffer is not linked with the current slot in the primary buffer. The next token in the primary buffer is instead not associated with the auxiliary buffer (its aux flag is reset), while the third slot contains data dropped by the Worker, although it is linked to three data items in the auxiliary buffer. In other words, the configuration in which aux == 1 and dropped == 1 is valid, and it makes it possible to completely replace a packet with a new one.
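The flag semantics just described can be sketched in Python (assuming, consistently with Figure 4, that the Master consumes the primary payload first, unless it is dropped, and then the linked run of auxiliary tokens; flat lists stand in for the circular buffers and all names are illustrative):

```python
def read_slot(primary, aux_buffer, m_cons, aux_cons):
    """Sketch of how the Master interprets the aux/dropped/next flags
    (Algorithms 4 and 7 combined): a primary slot may contribute its own
    payload (unless dropped) plus a run of auxiliary tokens terminated by
    a slot whose `next` flag is reset."""
    out = []
    slot = primary[m_cons]
    if not slot["dropped"]:
        out.append(slot["payload"])
    if slot["aux"]:
        while True:
            aux_slot = aux_buffer[aux_cons]
            out.append(aux_slot["payload"])
            aux_cons += 1
            if not aux_slot["next"]:
                break  # last auxiliary token linked to this primary slot
    return out, aux_cons

# aux == 1 and dropped == 1: the original packet is fully replaced
primary = [{"payload": "p0", "dropped": True, "aux": True}]
aux = [{"payload": "a0", "next": True}, {"payload": "a1", "next": False}]
tokens, aux_cons = read_slot(primary, aux, 0, 0)
```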
4. Formal verification
Assessing the correctness of an algorithm is often not straightforward, hence we built an abstract model of the Master with a single Worker in order to formally check some fundamental properties. We do not consider multiple Workers because the interaction between the Master and a Worker is independent of the interaction with any other Worker, hence this approach is sufficient to demonstrate the correctness of the whole system. In particular, we focus only on the primary buffer, as its operation is one of the main contributions of our work and hence needs a proof of correctness. The auxiliary buffer, instead, is not explicitly verified, as it is managed as a standard producer/consumer system, which has already been studied and validated in the existing literature [30].
The model of our algorithm has been developed in Promela [31], a well-known modeling language that, in conjunction with the SPIN [32] model checker, can be used to formally verify distributed and concurrent software against desired properties. The main purpose of the model checking technique is to explore all the possible states of a system and verify whether the specified properties hold in each execution path. Whenever the model checker finds an execution path that leads to a property violation, it provides the full counter-example with all the steps needed to reach the undesired behavior.
When creating an accurate model of the system, it is very important to keep the nature of the problem tractable, as model checking verification tools tend to exploit a massive amount of memory (state-space explosion problem). Therefore, the actual model of the data exchange mechanism has been built by omitting some implementation details that are not relevant for the analyzed properties, in order to reduce the overall number of states. This is possible because many system states (or runs) are mapped to the same abstract state (or run). A more detailed description of our model will be provided in Section 4.2.
4.1. Properties specification
Given the structure of our algorithm, we can identify six properties that must always be satisfied. The first two properties refer to conditions on some key variables that must hold to guarantee that no slots will be erroneously overwritten; they formally define regions of the buffer that are “owned” by one of the two modules (the Master or the Worker) at a given time.
**Property 1.** \( W.\text{prodIndex} \) must never exceed \( M.\text{prodIndex} \).
\( M.\text{prodIndex} \) indicates the first empty position in the primary buffer that must be filled by the Master. Hence, it represents a boundary that the Worker must never pass.
**Property 2.** \( M.\text{consIndex} \) must never exceed \( W.\text{prodIndex} \).
\( M.\text{consIndex} \) represents the position of the token being processed by the Master, while \( W.\text{prodIndex} \) identifies the position of the last token that can be processed by the Master.
We also consider two additional safety properties that must be satisfied by the system. Specifically we require that:
**Property 3.** The number of pending tokens delivered by the Master to the Worker and not yet processed by the Worker itself is, at any time, a non-negative integer not exceeding the maximum number of elements that the buffer can store, namely \((N - 1)\), where \( N \) is the total buffer size:
\[ 0 \leq tokens_{master \to worker} \leq (N - 1) \]
**Property 4.** The number of pending tokens delivered by the Worker to the Master and not yet processed by the Master itself is, at any time, a non-negative integer not exceeding the maximum number of elements that the buffer can store, namely \((N - 1)\), where \(N\) is the total buffer size:
\[ 0 \leq tokens_{worker \to master} \leq (N - 1) \]
Our circular buffer implementation always leaves at least one empty position, so as to distinguish the case where the buffer is completely full from the case where it is completely empty. This is why the actual buffer capacity is \(N - 1\).
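A minimal Python sketch of this convention (illustrative code, not the actual implementation):

```python
class CircularBuffer:
    """Minimal sketch of the N-1 convention: one slot is always left
    empty, so that prod == cons unambiguously means 'empty' and
    (prod + 1) % N == cons means 'full'."""

    def __init__(self, n):
        self.n = n
        self.slots = [None] * n
        self.prod = 0   # next slot to be written
        self.cons = 0   # next slot to be read

    def empty(self):
        return self.prod == self.cons

    def full(self):
        return (self.prod + 1) % self.n == self.cons

    def push(self, item):
        assert not self.full()
        self.slots[self.prod] = item
        self.prod = (self.prod + 1) % self.n

buf = CircularBuffer(4)        # N = 4, so at most N - 1 = 3 items fit
for i in range(3):
    buf.push(i)
# buf.full() is now True even though one slot is still unused
```

Without the sacrificial slot, prod == cons would be ambiguous between the full and the empty configuration, requiring an extra counter (and thus another shared variable).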
Finally, we consider two more properties related to the overall system behavior.
**Property 5. Deadlock absence.**
This property means that neither the Master nor the Worker ever enters an infinite waiting situation.
**Property 6. Livelock absence.**
This last property ensures that some useful work is eventually done by the Master. Here the notion of “useful work” is intended as the Master's capability to produce, sooner or later, new tokens for the Worker, e.g., by inserting new data into the buffer. This property is verified under the assumption that some fairness constraints are satisfied. Fairness constraints are necessary when the model includes nondeterministic choices, in order to ensure that these choices are made in a fair way; this means that, in each run, the nondeterministic choice must not always give the same result. For example, the scheduling of processes is modeled by SPIN using nondeterminism. As usual, we assume that the process scheduler eventually allows both the Master and the Worker to execute some instructions. This is a reasonable hypothesis, since most modern execution environments implement scheduling algorithms that avoid process starvation.
Details about how these properties have been specified in Promela are provided in Section 4.2.4.
### 4.2. Model details
#### 4.2.1. The primary buffer
Our abstract model does not require the modeling of realistic data into the buffer, but only the modeling of the buffer status. Consequently, only the indexes, the maximum buffer size (i.e. \(N\)), and the actual buffer size (i.e. the number of tokens currently stored in the buffer, denoted `buffer_size` in the Promela model) are modeled.
#### 4.2.2. The semaphore and the functions
The model of the semaphore consists of an asynchronous channel shared between the Master and the Worker. The *blocking wait* operation is modeled by reading a constant from the channel, while the *signaling* operation is modeled by writing the same constant into the channel. This is a very common pattern, useful to model various kinds of communication/synchronization primitives between two or more entities.
The functions presented in the pseudocode in Section 3 are modeled as Promela processes, since the language does not provide an explicit way to represent functions and their return value. We exploit the following well-known pattern: the caller sends a token through a synchronous channel shared with the callee in order to pass control to the invoked process; then, it performs a *receive* operation on the same channel in order to be awakened by the other end-point when the processing has terminated. Notice that the channel can also be used to pass arguments to and from the called process/function.
4.2.3. The Master and the Worker
The two main entities, the Master and the Worker, are modeled as two independent, concurrently running processes. They share the \texttt{M.prodIndex} and \texttt{W.prodIndex} variables and the channel/semaphore (as stated in our pseudo-code in Section 3.3), in addition to the buffer status variables. In order to decrease the number of states to be verified by the model checker, and hence reduce the overall verification time to a reasonable value, we use the following abstractions. First, in the if-statement of Algorithm 3, the check on the timestamp value is replaced in the model by a nondeterministic choice, as the model does not contain any explicit notion of the passing of time. Second, the \texttt{timeout()} function in the loop guard (Algorithm 1) is replaced by a nondeterministic choice as well. This means that, rather than modeling a realistic timeout mechanism, we instructed the model checker to pick a nondeterministic value to decide whether a timeout has occurred or not. Both these abstractions provide a significant performance enhancement without any loss in terms of exhaustiveness of the verification.
4.2.4. The Properties
In properties 1 and 2, the condition that one index must not exceed another index has to be expressed taking into account that, in a circular buffer, indexes are reset to zero when they reach the end of the buffer. For this reason, it would be wrong to simply state property 1 as \texttt{W.prodIndex \leq M.prodIndex}. Actually, after having been reset to zero, \texttt{M.prodIndex} has to be smaller than or equal to \texttt{W.prodIndex} until \texttt{W.prodIndex} is reset to zero too. According to this consideration, this property is expressed by means of a conditional assertion, the condition being a boolean state variable (named \texttt{work_index_M_prod_index_inequality}) that is flipped whenever one of the two counters is reset to zero. The conditional assertion is written in Promela as follows:
\begin{verbatim}
if
:: (work_index_M_prod_index_inequality == 0) -> assert(W.prod_index <= M.prod_index);
:: (work_index_M_prod_index_inequality == 1) -> assert(W.prod_index >= M.prod_index);
fi;
\end{verbatim}
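The same conditional check can be sketched outside Promela; the Python fragment below mirrors the two-sided assertion (the boolean argument plays the role of work_index_M_prod_index_inequality and is assumed to be toggled elsewhere on every wrap-around):

```python
def check_property_1(w_prod, m_prod, inequality_flipped):
    """Sketch of the conditional assertion for Property 1: which side of
    the inequality must hold depends on a flag toggled every time one of
    the two indexes wraps around to zero."""
    if not inequality_flipped:
        assert w_prod <= m_prod   # normal phase
    else:
        assert w_prod >= m_prod   # M.prodIndex has wrapped, W has not

# Normal phase: the Master is ahead of the Worker.
check_property_1(w_prod=3, m_prod=5, inequality_flipped=False)
# After M.prodIndex wrapped to zero, the order is temporarily reversed.
check_property_1(w_prod=6, m_prod=1, inequality_flipped=True)
```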
For checking properties 3 and 4 we use a counter that represents the number of outstanding tokens (from Master to Worker for property 3 and from Worker to Master for property 4). This counter is incremented when a new token is produced and decremented when a token is consumed. At each increment or decrement operation, an assertion is introduced in order to check that the counter value remains within its intended range.
Property 5 (deadlock absence) is a built-in property automatically checked by SPIN, so no specification is necessary for it.
Properties 1-5 are all safety properties. They can all be checked together by SPIN via a single reachability analysis. Property 6 is instead a liveness property that can be expressed by means of the following Linear Temporal Logic (LTL) formula:
\[ (\text{fairness\_constraint}) \Rightarrow \Box((\text{master\_progress} == 0) \Rightarrow <>(\text{master\_progress} == 1)) \quad (1) \]
This formula takes the form of an implication, where the left hand side (\text{fairness\_constraint}) is another LTL formula specifying the additional fairness constraints\(^2\) that are assumed for the analysis of the right hand side of the implication. The latter is expressed in terms of the \text{master\_progress} boolean variable which tracks the execution of useful work done by the Master. At the beginning of each loop of the master scheduling algorithm, \text{master\_progress} is set to 0, whereas it is set to 1 when the Master produces some new data for the Worker. The property specifies that it is always true (\Box) that, if \text{master\_progress} is 0, eventually (<> ) it will become 1, thus expressing the fact that the Master continually executes useful work (i.e. produces new data).
The additional fairness constraints \text{fairness\_constraint} are related to the two nondeterministic choices used in the model to decide whether a timeout has occurred, and whether the Worker has to be signaled because the tokens produced by the Master have become too old. They are expressed by the following LTL formula, composed of two different sub-constraints:
\[ \Box((\text{scheduling} == 0) \Rightarrow <>(\text{scheduling} == 1)) \;\&\&\; \Box((\text{old\_flag} == 0) \Rightarrow <>(\text{old\_flag} == 1)) \quad (2) \]
The first sub-constraint involves the boolean variable \text{scheduling} that records, at each master loop, whether the timeout has been triggered (\text{scheduling} = 0) or not (\text{scheduling} = 1). This first constraint specifies
\(^2\)These constraints are in fact in addition to the standard constraint about the scheduling of processes, which can be automatically considered by Spin.
that, starting from any time instant, eventually there will be an iteration of the loop in which the timeout event does not occur. The second constraint is similar, but it involves the boolean variable old_flag, which records the nondeterministic result of the check on the timestamp value in the if-statement of Algorithm 3.
In order to avoid unnecessary complexity, Property 6 has been checked separately, in a run where the safety properties are not checked. The two full Promela models (one for checking the safety properties and the other for checking Property 6) are publicly available in [33].
4.3. Verification results
The model explained above can be exhaustively verified for different values of the main model parameters, as shown in Table 1. For each property, the table specifies the considered range of values for the buffer size, MASTER_PKT_THRESHOLD and WORKER_PKT_THRESHOLD. For the sake of scalability of the verification process, and without loss of generality, we used rather small values compared to a realistic buffer, which could contain millions of tokens; possible structural bugs would be detected also in a small-size system deployment.
Some inconsistent parameter settings in the considered ranges, such as a threshold greater than the buffer size, are skipped in our verification work. Notice also that, with a buffer size equal to one, Property 6 is not considered: since one slot is always left empty, such a buffer cannot contain any token, and therefore the Master is not able to perform any useful work. Properties 1-5 are verified even without forcing any fairness constraint, since their satisfaction depends neither on how processes are scheduled, nor on the nondeterministic choices that model the time-related aspects.
In conclusion, the results of our verification process fully demonstrate the correctness of the algorithm from different points of view (absence of index misbehavior and of accidental packet overwriting, and absence of deadlocks and livelocks). These results can be reproduced using the models available in [33].
5. Implementation
The achievable performance depends not only on the design, but also on implementation issues. Therefore, this section presents some implementation choices that can improve the performance and scalability of our algorithm, and that have been adopted in our prototype implementation.
**Private copies of shared variables.** As in many algorithms derived from the producer-consumer problem, in our case we also need to keep two processes in sync by means of a pair of shared variables, one written only by the first process, the other written only by the second process. Although concurrency issues are limited in this case (no contention can occur because the two processes never try to write the same variable at the same time), the actual implementation on real hardware can introduce additional issues, as shown in MCRingBuffer [17]. In fact, when a first CPU core modifies the content of a variable that is shared with a different CPU core, the entire cache line (64 bytes on modern Intel architectures) containing that variable is invalidated in the second core. If the second core needs to read that variable, the hardware has to retrieve its value either from the shared cache (e.g., the L3 in many recent Intel architectures) or from the main memory, with a consequent performance penalty. In our algorithm, this problem occurs for \( M.\text{prodIndex} \), incremented by the Master whenever a new token is inserted into the primary buffer and read by the Worker, and for \( W.\text{prodIndex} \), incremented by the Worker and read by the Master. However, our algorithm is robust enough to operate correctly even if the private copies of those variables lag behind the shared values. As a consequence, the Worker creates a private copy of \( M.\text{prodIndex} \) just after waking up, while the Master copies the content of \( W.\text{prodIndex} \) into a private variable before reading data from the shared buffer. The Master and the Worker can perform their operations according to the value of their local copies, which are re-aligned with the actual values only periodically; this does not preclude correct system operation, while ensuring a significant reduction of cache misses.
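The private-copy idea can be sketched as follows (Python is used for illustration only; in the real prototype the refresh is a plain memory read of the shared index, and all names here are ours):

```python
class SharedIndexReader:
    """Sketch of the 'private copy' optimization: the consumer refreshes
    its local copy of the producer's shared index only when the local
    view is exhausted, instead of re-reading the shared (cache-line
    bouncing) variable on every token. `read_shared` stands in for a
    read of W.prodIndex or M.prodIndex from shared memory."""

    def __init__(self, read_shared):
        self.read_shared = read_shared
        self.local = 0
        self.shared_reads = 0    # instrumentation for this example only

    def limit(self, cons_index):
        # Re-align the private copy only when we would otherwise stall.
        if self.local == cons_index:
            self.local = self.read_shared()
            self.shared_reads += 1
        return self.local

shared = {"idx": 4}              # the producer has published 4 tokens
reader = SharedIndexReader(lambda: shared["idx"])
for cons in range(4):
    reader.limit(cons)           # one shared read serves four tokens
```

In the concurrent setting the refresh may return a stale (smaller) value; as noted above, the algorithm tolerates this, since a lagging copy only delays consumption, never corrupts it.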
**Shared variables on different cache lines.** Because of the same problem mentioned in the previous paragraph (a CPU core can invalidate an entire cache line in the other cores), our code implements a cache-line separation mechanism (similar to MCRingBuffer [17]) that consists of storing each shared variable (possibly extended with padding bytes) on a different cache line. This way, when the Master changes, for instance, the value of \( M.\text{prodIndex} \), the cache line containing \( W.\text{consIndex} \) is not invalidated in the private cache of the core where the Worker is executed.
**Alignment with cache lines.** In case of a cache miss, the hardware introduces a noticeable latency because of the need to transfer the data from memory to the cache, which happens in blocks of fixed size (the cache line). From that moment on, all the memory accesses within that block of addresses are very fast, as data are served from the L1 cache. In order to minimize the number of cache misses (and the associated performance penalty), our prototype was engineered to align the most frequently accessed data so that they span the minimum number of cache lines. In particular, the starting memory address of the packet buffers and their slot sizes are multiples of the cache line size; the same technique is used to minimize the access time for the most important data used in the prototype.
**Use of huge memory pages.** Huge pages are convenient when a large amount of memory is needed, since they decrease the pressure on the Translation Lookaside Buffer (TLB) for two reasons. First, the load of virtual-to-physical address translation is split across two TLBs (one for huge pages and the other for normal pages), preventing normal applications (based on normal pages) from interfering with the packet exchange mechanism (which uses huge pages). Second, huge pages reduce the number of TLB entries required when a large amount of memory is used. We use huge pages for the shared (primary and auxiliary) buffers; the drawback is a potential increase of the total memory required by the algorithm, because the minimum size of each buffer increases from 4KB to 2MB.
**Preallocated memory.** Dynamic memory allocation should be avoided during the actual packet processing, as it would heavily decrease the performance of the whole system. In our case, all the buffers used by the packet exchange mechanism are allocated at the startup of each Worker, allowing the system to add/remove Workers at run-time while avoiding dynamic memory allocation.
**Emulated timestamp.** Getting the current time is usually rather expensive on standard workstations, as it requires the intervention of the operating system and, often, an I/O operation involving the hardware clock. In our case, we emulate the timestamp needed to wake up a Worker when packets have been waiting for service for too long by introducing the concept of the *current round*, i.e., the number of loops executed by the Master in Algorithm 1. As a consequence, our implementation schedules a Worker for service when there are packets waiting for more than \( N \) rounds, where \( N \) is a number that can be tuned at run-time based on the expected load on the Master.
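A sketch of this round-based timeout (ROUND_THRESHOLD stands for the tunable \( N \); the function name is illustrative):

```python
ROUND_THRESHOLD = 3   # the tunable N of the text

def worker_needs_wakeup(current_round, oldest_pending_round):
    """Sketch of the emulated timestamp: instead of calling a (costly)
    clock routine, the Master counts its main-loop iterations ('rounds')
    and wakes a Worker whose oldest pending packet has been waiting for
    more than ROUND_THRESHOLD rounds."""
    return (current_round - oldest_pending_round) > ROUND_THRESHOLD

# A packet queued at round 10 does not trigger a wakeup at round 12,
# but it does at round 14.
```

The round counter is simply incremented by the Master on every loop, so checking the "age" of a pending packet reduces to an integer subtraction instead of a system call.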
**Batch processing.** Batch processing is convenient because it keeps a high degree of code and data locality, with a positive impact on cache misses. Our prototype implements batch processing whenever possible: e.g., the Master reads all waiting packets from a Worker before serving the next one, and Workers process all the packets in their queue before suspending themselves. The drawback is a potential increase of the latency in the data transfer.
**Semaphores.** A simple POSIX semaphore is used to wake up a Worker when at least MASTER_PKT_THRESHOLD packets are queued in the primary buffer, or when some packets have been waiting for a long time and the timeout has expired. Although POSIX semaphores are implemented in kernel space, their impact on performance is acceptable because, by design of the algorithm, they are rarely accessed. No explicit signal is used in the other direction: the shared variables M.consIndex and W.prodIndex are used by the Master to detect the presence of packets that need to be read from the buffer.
**Threading model.** Context switching should be avoided whenever possible because of its cost, particularly when it happens frequently (as in packet processing applications, which are usually rather simple and often handle just a few packets in a row). For this reason, the Master is a single-threaded process, cycling on a busy-waiting loop and consuming an entire CPU core, while Workers (which are single-threaded processes as well) work in interrupt mode and share the remaining CPU cores. While the Master could be parallelized over multiple cores as long as the function chains are not interleaved\(^3\), by design our implementation keeps it locked to a single core. In fact, we prefer to allocate most of the processing power to the (potentially huge number of) Workers, as they host the network functions that perform the actual processing, i.e., the one that is useful from the perspective of the end users.
6. Experimental results
In order to evaluate the performance and scalability of the data exchange algorithm described in Section 3, we carried out several tests on our prototype implementation (which follows the implementation choices described in Section 5) running on a workstation equipped with an Intel i7-3770 @ 3.40 GHz (four CPU cores plus hyperthreading), 16 GB RAM, a 16x PCIe bus, two Silicom dual-port 10 Gigabit Ethernet NICs based on the Intel x540 chipset (8x PCIe), and Ubuntu 12.10, kernel 3.5.0-17-generic, 64 bits. In all tests, an entire CPU core is dedicated to the Master, while Workers have been allocated on the remaining CPU cores in a way that maximizes the throughput of the system. All the following graphs are obtained by averaging the results of 100-second tests repeated 10 times.
The data exchanged between the Master and the Workers consists of synthetic network packets of three sizes: 64 bytes to stress the forwarding capabilities of the chain, 700 bytes to match the average packet size in current networks, and 1514 bytes to stress the data transfer capabilities of the system. We first present a set of experiments where packets exchanged between the Master and the Workers are directly read/written from/to memory, without involving the network. These tests aim at validating the performance of the algorithm in isolation, without any disturbance such as the cost introduced by the driver used to access
---
\(^3\)Interleaved chains may introduce additional complexity because multiple Masters may collide when feeding a single Worker. This would then require an extension of our algorithm (no longer lock-free) that is left to a future work.
the NIC or the overhead of the PCIe bus. In these testing conditions, Section 6.3 compares our algorithm against two existing approaches based on the traditional producer/consumer paradigm, which are typically used to exchange packets between the vSwitch and the network functions consolidated on the same server. In particular, the comparison shows the advantages deriving from both the absence of data copies in the Worker and the blocking operating mode of the Worker itself. Finally, Section 6.6 presents some results involving a real network, where the workstation under test is connected to a second workstation, acting as both traffic generator and receiver, through two dedicated 10Gbps NICs. This setup allows us to derive the precise latency experienced by packets in our middlebox. In this case we use the PF_RING/DNA drivers [26] to read/write packets from/to the NIC, since they allow the Master to send/receive packets without the intervention of the operating system. In addition, data coming from the network is read in polling mode, in order to limit the additional overhead due to NIC interrupts, and in batches of several packets, in order to maximize code locality. Similar techniques are also used when sending data to the network after all the processing has taken place.
6.1. Single chain - Throughput
This section reports the performance of our algorithm in a scenario where all packets traverse the same, statically defined chain. Tests are repeated with chains of different lengths, and the measured throughput is provided in graphs that include: (i) a bar view, corresponding to the left Y axis, which reports the throughput in millions of packets per second; (ii) a point-based representation, referring to the right Y axis, which reports the throughput in Gigabits per second.
Figure 5 shows the throughput offered by the function chain in different conditions. These numbers depend both on the design aspects of our algorithm (e.g., no data copy in the Worker, polling model in the Master, blocking model in the Worker, etc.) and on the choices we made when implementing the prototype (e.g., data aligned with cache lines, private copies of shared variables, etc., as detailed in Section 5). For instance, the overall throughput of the chain (i.e., the packets/bits that exit from the chain) decreases with the number of Workers because of our choice of reserving most of the CPU power for the Workers, hence limiting the Master to a single CPU core (Section 5 - threading model).
Figure 5(a) shows the throughput that could be achieved in ideal conditions, that is: (i) with dummy Workers, i.e., Workers that do not touch the packet data, and (ii) with the Master always reading the same input packet from memory and copying it into the buffer of the first Worker of the chain. This reduces the overall number of CPU cache misses experienced at the beginning of the chain and provides an ideal view of the system, where the penalties due to memory accesses are kept to a minimum. Results reported in Figure 5(b) are instead gathered in a more realistic scenario, i.e., with Workers that access the packet content and compute a simple signature over the first 64 bytes of each packet. This may represent a realistic workload, as it emulates the fact that most network applications operate on the first bytes (i.e., the headers) of the packet. This test shows that performance is reduced compared to Figure 5(a) for two reasons: (i) the higher number of cache misses generated by the Workers when accessing the packet content, and (ii) the additional processing time spent by the Workers to complete their job.
(a) Dummy Workers and a single packet in memory.
(b) Real Workers and a single packet in memory.
(c) Dummy Workers and 1M packets in memory.
(d) Real Workers and 1M packets in memory.
Figure 5: Throughput of a single function chain with the algorithm presented in this paper.
The next tests consider a scenario where the input data for the chain is stored in a buffer containing 1M packets, thus emulating a real middlebox that receives traffic from the network. In particular, Figure 5(c) refers to a scenario with dummy Workers, as in Figure 5(a), and shows how an apparently insignificant difference in the memory access pattern can dramatically change the throughput. In fact, the Master experiences frequent cache misses when reading packets at the beginning of the chain. This modification alone halves the throughput compared to Figure 5(a), particularly when packets traverse chains of limited length. With longer chains, instead, this additional overhead at the beginning is amortized by the cost of the rest of the chain.
Finally, Figure 5(d) depicts a realistic scenario where Workers access the packet content (as in Figure 5(b)) and the Master feeds the chain by reading data from a large initial buffer (1M packets). Even in this case our algorithm guarantees an impressive throughput, reaching about 38 Mpps with 64B packets.
In order to confirm that, with the current workload, the Master represents the bottleneck of the system, Figure 7 shows the internal throughput of the chain, namely the total number of packets moved by the Master under the same test conditions as Figure 5(d). This figure gives insight into the processing capabilities of the Master, which increase slightly with a growing number of Workers, and proves the effectiveness of our algorithm, as the number of packets it processes is essentially independent of the number of Workers.
6.2. Single chain - Latency
Some architectural and implementation choices, such as working with batches of packets, aim at improving the throughput but may adversely affect the latency. For this reason, this section gives insight into the latency experienced by packets traversing our chains. Measurements are based on the `gettimeofday` Unix system call and, in order to reduce its impact on the system, only sampled packets (one packet out of a thousand) have been measured.
Figure 6(a) shows the latency of 64B packets when traversing a function chain consisting of a growing number of Workers, with real Workers and 1M packets in memory. As expected, the latency increases with the length of the chain.
6.3. Single chain - Comparison with other algorithms
This section compares our data exchange algorithm with two other approaches that could be used to exchange packets between the Master and the Workers, and which represent the baseline algorithms used to evaluate the performance improvements brought by our research. In particular, the comparison aims at validating the advantages of two important aspects of our algorithm: the absence of a data copy in the Worker, and the blocking operating mode of the Worker itself.
In this respect, we cannot directly compare our algorithm with existing prototypes used in NFV such as VALE [15], OVS [21] and xDPd [22], because they include the overhead of packet classification (e.g., L2 forwarding, OpenFlow matching), which would affect the performance of the data exchange algorithm. As a consequence, we distilled the fundamental design choices of the most important alternative approaches and carefully implemented them using, whenever applicable, the guidelines listed in Section 5 (e.g., shared variables on different cache lines, private copies of shared variables, and more).
The first baseline algorithm is based on the traditional producer/consumer paradigm, in which the Master shares two buffers with each Worker: the first is used by the Master to provide packets to the Worker, while the second operates in the opposite direction. In this case, similarly to our algorithm, only the Master operates in polling mode, while the Worker wakes up when there are packets to be processed. The second baseline algorithm closely follows the processing model suggested by Intel in the DPDK library [11]. Also in this case two buffers (again based on the traditional producer/consumer paradigm) are shared between the Master and each Worker; however, these buffers contain pointers, which means that the actual data is stored in a shared memory region and never moved between the components of the function chain (zero-copy). Moreover, both the Master and the Workers operate in polling mode. Although this solution neither provides isolation among the Workers nor limits the CPU consumption, it has been selected as a baseline to be compared against our proposal because it represents the "standard" way to move packets in network function chains today.
The baseline algorithms are executed in realistic conditions, namely with Workers accessing packets and 1M packets in memory; the results obtained should therefore be compared with the numbers reported in Figure 5(d).
As expected, the throughput of the chain drops by about 30% when unidirectional buffers are used, as shown by comparing Figure 5(d) and Figure 8(a). This is mainly due to the operating principles of our primary buffer, which allows the Worker to send a packet back to the Master without moving the packet itself, while the baseline algorithm requires one additional data copy in the Worker.
Instead, the second baseline algorithm slightly outperforms our algorithm as long as the number of jobs (one Master plus N Workers) is lower than the number of available CPU cores, as is evident by comparing Figure 8(b) with Figure 5(d). This is due to the absence of data copies and to the polling-based operating mode implemented in the Workers. However, a much stronger performance degradation than with our solution (less than 1 Mpps of throughput) is noticeable when 8 (or more) Workers are active, because at least two of them have to share the same CPU core.
The second baseline algorithm has also been evaluated in terms of the latency introduced on the flowing packets. As with the throughput, it outperforms our proposal when the number of running jobs is less than the number of CPU cores, as is evident by comparing Figure 6(a) and Figure 6(b).

For instance, six chained Workers introduce an average latency of $358\mu s$, against the $784\mu s$ obtained with our algorithm. With more Workers, instead, the average latency of the baseline algorithm reaches $420\text{ms}$, which is definitely not acceptable. This poor performance is due to the fact that many polling processes share the same CPU core. Hence, this solution provides neither isolation among Workers (due to the zero-copy) nor acceptable performance when the number of Workers exceeds the number of available cores, making it inappropriate for our objectives.
6.4. Single chain - Other tests
Additional tests have been performed in order to evaluate some other aspects of the system.
6.4.1. Threads vs. Processes
Threads appear more convenient than processes because they share the same virtual memory space, while processes have distinct virtual memory spaces. In our system, where the data exchange mechanism requires memory shared between the Master and a Worker, this could have an impact on both the cache efficiency and the TLB behavior and, consequently, on the overall performance of the system. With respect to the former, two processes sharing the same physical memory address use two different virtual addresses, which requires two entries in the L1/L2 caches\(^4\). On the contrary, threads use the same virtual address, hence potentially allowing the same cache line to be shared by different threads. With respect to the TLB, as the same (virtual) address space is shared by many threads, the number of entries required in the TLB is reduced as well; processes, by contrast, are expected to generate a higher number of TLB misses.
---
\(^4\)The L3 cache operates with physical addresses.
In order to guarantee memory isolation among Workers, which is a key point in a multi-tenant NFV node, the Master and all the Workers should be implemented as different processes. Although this suggests a possible performance penalty compared to a thread-based implementation, our experiments dispel this belief, as the overall performance is essentially the same in both cases. The reason is that the L1/L2 caches are private to each physical core, but the Master and the Workers are usually executed on different cores. Hence, an address already cached by the core executing the Master will not be found in the cache of the core executing the Worker, forcing the latter to retrieve that data from the (physically addressed) L3 cache, no matter whether it is a thread or a process. As a consequence, as far as performance is concerned, our system shows no difference between a thread-based and a process-based implementation.
6.4.2. Normal memory vs. huge pages
We also evaluated the impact of our choice of using huge pages (each consisting of 2MB of memory in our testbed) instead of normal pages (4KB) for the shared buffers. Although it may sound strange, the results of the two approaches do not differ significantly in the test scenarios considered so far. This is a consequence of our specific test conditions, where the Master and the Workers use very little memory in addition to the shared buffers. Hence, we repeated the test with Workers executing a deep packet inspection algorithm based on a Deterministic Finite Automaton (DFA), which requires a huge amount of memory to store the DFA used to recognize the given patterns in the packets. In this case, the adoption of huge pages for the shared buffer results in roughly a 10% improvement in terms of throughput.
6.5. Multiple chains
While previous tests focused on packets traversing a growing number of functions all belonging to the same chain, this section evaluates the case in which multiple function chains are executed in parallel and each packet traverses only one of them. This significantly stresses the CPU cache, as (i) the Master has to receive packets from a high number of buffers, and (ii) the packets read by the Master are likely to be copied into different buffers for the next processing step.
Data read from the initial memory buffer (containing 1M packets) is provided, in a round-robin fashion, to a growing number of function chains, each composed of two Workers. During the tests, each Worker is involved in two chains, meaning that, when 1000 Workers are deployed, packets are spread across 1000 different function chains. Workers are allocated among six CPU cores in a way that minimizes the number of times a packet has to be moved from one core to another, in order to limit CPU cache synchronization operations among cores (Section 5).
The results obtained are shown in Figure 9. As in the previous tests, these numbers are due to the combined effect of the choices made when designing our algorithm (Section 3) and of the implementation guidelines followed to efficiently implement the prototype (Section 5).
Figure 9(a) provides the overall throughput measured at the end of all the chains, which decreases smoothly as the number of chains grows. Notably, it remains at several Gbps even with 1000 chains in the system, thus confirming the effectiveness of our algorithm. Figure 9(b) instead shows the cumulative distribution of the latency experienced by 64B packets traversing the chains, which ranges from an average value of 80µs in the case of 10 function chains, to an average value of 3.8ms when 1000 chains are active.
6.6. Network tests
This section evaluates our algorithm in a real deployment scenario, i.e., when executed on a workstation that receives/sends traffic from/to the network. In this case the overall performance of the system depends on the algorithm, on the implementation choices made when developing the prototype, and on additional aspects such as the driver used to access the NIC. Nevertheless, these results provide insight into the behavior of the algorithm when used in the context it was designed for.
The throughput obtained in this scenario, whose testing conditions are the same as those of Figure 5(d), is depicted in Figure 10(a). Results are limited by the speed of the input NIC in several cases, particularly with large packets and (relatively) short chains. With longer chains (i.e., 10 cascading Workers) the throughput is even slightly better than that obtained in Figure 5(d) without the network. This may be due to the fact that real NICs create an input buffer that is much smaller than the 1M-packet buffer used in the previous test, hence potentially improving data locality.
(a) Throughput.
(b) Latency.
Figure 10: Results with a function chain of growing length, with the Master accessing the network.
Figure 10(b) shows the cumulative distribution function of the latency introduced by network function chains of different lengths when traversed by 64B packets. These numbers, obtained by sending packets at the same rate shown in Figure 10(a), measure the time between the instant in which a packet is scheduled for transmission in the traffic generator and the instant it is received by our testing software in the traffic receiver. We thus consider all the time spent by the packet in our middlebox, plus the network latency and the time spent in the traffic generator/receiver after/before hitting our timestamping code. In particular, the reported numbers also include the time that a packet spends in the input buffer before being picked up and sent through the chain by the Master, because of its batch-based reading mode. Our measurements show that the latency, albeit still acceptable, is about 4-5 times higher than in Figure 6(a).
7. Conclusion
This paper has proposed an efficient way to move data between network functions (the Workers) and a virtual switch module (the Master), in order to implement virtual network function chains. The architecture is based on a distinct pair of circular buffers shared between the Master and each Worker, and aims at achieving a scalable and high-performance system while guaranteeing traffic isolation among the different (and potentially very numerous) Workers.
One of the peculiarities of this approach is that, through the primary buffer, data is sent to a Worker and then returned to the Master for further processing with zero copies. A form of batching has also been introduced in order to amortize the cost of context switches, together with a safeguard mechanism that avoids packet starvation in the case of Workers traversed by a limited amount of traffic. The auxiliary buffer is instead used by the Worker to send new data to the Master.
Formal verification techniques have been applied in order to rigorously prove the absence of deadlocks and livelocks, and also to guarantee that no packet can be accidentally overwritten due to concurrency issues such as race conditions or incorrect use of shared indexes.
Finally, performance and scalability of the proposed solution have been evaluated by means of a wide range of experiments made on a real implementation.
Acknowledgment
This work was conducted within the framework of the FP7 UNIFY project, which is partially funded by the Commission of the European Union. Study sponsors had no role in writing this report. The views expressed do not necessarily represent the views of the authors’ employers, the UNIFY project, or the Commission of the European Union.
References
Dream and reality: incremental specialization in a commercial operating system
Andrew Black
Charles Consel
Calton Pu
Jonathan Walpole
Crispin Cowan
Recommended Citation
Black, Andrew; Consel, Charles; Pu, Calton; Walpole, Jonathan; Cowan, Crispin; Autrey, Tito; Inouye, Jon; Kethana, Lakshmi; and Zhang, Ke, "Dream and reality: incremental specialization in a commercial operating system" (1995). CSETech. 31.
http://digitalcommons.ohsu.edu/csetech/31
Dream and Reality:
Incremental Specialization in a Commercial Operating System*
Andrew Black, Charles Consel, Calton Pu, Jonathan Walpole,
Crispin Cowan, Tito Autrey, Jon Inouye, Lakshmi Kethana and Ke Zhang
Technical Report 95-001
Department of Computer Science and Engineering
Oregon Graduate Institute of Science & Technology
March 24, 1995
Abstract
Conventional operating system code is written to deal with all possible system states, and performs considerable interpretation to determine the current system state before taking action. A consequence of this approach is that kernel calls which perform little actual work take a long time to execute. To address this problem, we use specialized operating system code that reduces interpretation, but still behaves correctly in the fully general case. We show that specialized operating system code can be generated and bound incrementally as the information on which it depends becomes available. We extend our specialization techniques to include the notion of optimistic incremental specialization: a technique for generating specialized kernel code optimistically for system states that are likely, but not certain, to occur. The ideas outlined in this paper allow the conventional kernel design tenet of "optimizing for the common case" to be extended to the domain of adaptive operating systems. We also show that aggressive use of specialization can produce in-kernel implementations of operating system functionality with performance comparable to user-level implementations. We demonstrate that these ideas are applicable in real-world operating systems by describing a re-implementation of the HP-UX file system. Our specialized read system call reduces the cost of a single byte read by 50%, and an 8 KB read by 20%, while preserving the semantics of the HP-UX read call. By relaxing the semantics of the HP-UX read we were able to cut the cost of a single byte read system call by more than an order of magnitude.
---
*This research is partially supported by ARPA grant N00014-94-1-0845 and grants from the Hewlett-Packard Company.
1 Introduction
Much of the complexity in conventional operating system code arises from the requirement to handle all possible system states. A consequence of this requirement is that operating system code tends to be "generic", performing extensive interpretation and checking of the current environment before taking action. One of the lessons of the Synthesis operating system [15] is that significant gains in efficiency can be made by replacing this generic code with specialized code. The specialized code performs correctly only in a restricted environment, but it is chosen so that this restricted environment is the common case.
By way of example, consider a simplified UNIX File System interface in which open takes a path name and returns an “open file” object. The operations on that object include read, write, close, and seek. The method code for read and write can be specialized, at open time, to read and write that particular file, because at that time the system knows, among other things, which file is being read, which process is doing the reading, the file type, the file system block size, whether the inode is in memory, and if so, its address, etc. Thus, a lot of the interpretation of file system data structures that would otherwise have to go on at every read can be done once at open time. Performing this interpretation at open time is a good idea if read is more common than open, and in our experience with specializing the UNIX file system, loses only if the file is opened for read and then never read.
Through extensive use of this kind of specialization Synthesis achieved improvement in kernel call performance ranging from a factor of 3 to a factor of 56 [15] for a subset of the UNIX system call interface. However, the performance improvements due directly to code specialization were not separated from the gains due to other factors, including the design and implementation of a new kernel in assembly language, and the extensive use of other new techniques such as lock-free synchronization and software feedback.
The work described in this paper examines the benefits of specialization more directly, in the context of a commercial UNIX operating system (HP-UX) [6] and the C programming language. The experiments described here focus on the specialization of the read system call, which retains the standard UNIX semantics. We further extend the work done in Synthesis [15] by showing how specialization can be done incrementally and optimistically.
The remainder of the paper is organized as follows. Section 2 elaborates on the notion of specialization, and defines incremental and optimistic specialization. Section 3 describes the application of specialization to the HP-UX read system call. Section 4 analyses the performance of our implementation. Section 5 compares the dream with reality and discusses the key areas for future research. Related work is discussed in section 6. Section 7 concludes the paper.
2 What is Specialization?
Specialization has its conceptual roots in the field of partial evaluation (PE) [7, 19]. In general, PE takes a program and a list of bindings for some (but not all) of the free variables, and produces a restricted program in which the values for those variables are referenced directly, as constants. PE then does aggressive constant folding and propagation, and dead code elimination. Traditionally, PE has been performed off-line in a single step.
In the read example of Section 1, if the read code is partially evaluated with the invariant that the open file variable is bound to a particular file, then all of the data structure analysis to determine whether the file is local or remote, the device on which it resides, its block size, etc. can be done once at PE time, rather than repeatedly at read time. The fact that the specific open file object becomes known only at runtime (during `open`) means that the PE must be performed on-line.
Given a list of invariants, which may be learned either statically or at run-time, a combination of on-line and off-line PE should be capable of generating the required specialized code. For example, the Synthesis kernel [17] performed the (conceptual) PE step just once, at runtime during `open`. It is in principle possible to apply the on-line partial evaluator again at every point where new invariants become known (i.e., some or all of the points at which more information becomes available about the bindings that the program contains). We call this repeated application of an on-line partial evaluator *incremental specialization* [8].
The discussion so far has considered generating specialized code only on the basis of known invariants, i.e., bindings that are known to be constant. In an operating system, there are many things that are *likely* to be constant for long periods of time, but may occasionally vary. For example, it is *likely* that files will not be shared concurrently, and that reads to a particular file will be sequential. We call these assumptions *quasi-invariants*. If specialized code is generated, and used, on the assumption that quasi-invariants hold most of the time, then performance should improve. However, the system must correctly handle the cases where the quasi-invariants do not hold.
Correctness can be preserved by guarding every place where quasi-invariants may become false. For example, suppose that specialized read code is generated based on the quasi-invariant "no concurrent sharing". A *guard* placed in the `open` system call could be used to detect other attempts to open the same file concurrently. If the guard is triggered, the `read` routine must be "unspecialized", either to the completely generic `read` routine or, more accurately, to another specialized version that still capitalizes on the other invariants and quasi-invariants that remain valid. We call the process of replacing one version of a routine by another (in a different stage of specialization) *replugging*. We refer to the overall process of specializing based on quasi-invariants as *optimistic specialization*. Because it may become necessary to replug dynamically, optimistic specialization requires incremental specialization.
If the optimistic assumptions about a program's behavior are correct, the specialized code will function correctly. If one or more of the assumptions become false, the specialized code will break, and it should be replugged. This transformation will be a win if specialized code is executed many times, i.e., if the savings that accrue from the optimistic assumption being right, weighted by the probability that it is right, exceed the additional costs of the replugging step, weighted by the probability that it is necessary (see Section 4 for details).
The discussion so far has described incremental and optimistic specialization as forms of on-line PE. However, in the operating system context, the full cost of code generation must not be incurred at runtime. The cost of runtime code generation can be avoided by generating code *templates* statically and optimistically at compile time. At kernel call invocation time, the templates are simply filled in and bound appropriately.
3 Specializing HP-UX read
To explore the real-world applicability of the techniques outlined above, we applied incremental and optimistic specialization to the HP-UX 9.04 \texttt{read} system call. \texttt{read} was chosen as a test case because it is a frequently used and well-understood piece of code and is representative of many other UNIX system calls. The HP-UX implementation of \texttt{read} is also representative of many other UNIX implementations. Therefore, we expect our results to be applicable to other UNIX-like systems.
3.1 Overview of the HP-UX read Implementation
To understand the nature of the savings involved in our specialized read implementation it is first necessary to understand the basic operations involved in a conventional Unix read implementation. Assuming that the read is from a normal file and that its data is in the buffer cache, the basic steps are as follows [16].
1. System call startup: privileged promotion, switch to kernel stack, and update user structure.
2. Identify the file and file system type: translate the file descriptor number into a file descriptor, then into a vnode number, and finally into an inode number.
3. Lock the inode.
4. Identify the block: translate the file offset value into a logical (file) block number, and then translate the logical block number into a physical (disk) block number.
5. Find the virtual address of the data: find the block in the buffer cache containing the desired physical block and calculate the virtual address of the data from the file offset.
6. Data transfer: Copy necessary bytes from the buffer cache block to the user’s buffer.
7. Process another block?: compare the total number of bytes copied to the number of bytes requested; goto step 4 if more bytes are needed.
8. Unlock the inode.
9. Update the file offset: lock file table, update file offset, and unlock the file table.
10. System call cleanup: update kernel profile information, switch back to user stack, privilege demotion.
The above tasks can be categorized as either interpretation, traversal, locking, or work. Interpretation involves activities such as conditional and case statement execution, and examining parameters and other system state variables to derive a particular value. Traversal is basically a matter of dereferencing and includes function calling and data structure searching. Locking includes all synchronization-related activities. Work is the fundamental task of the call. In the case of the read, the only work is to copy the desired data from the kernel buffers to the user’s buffer.
Ideally, all of the tasks performed by a system call should be in the work category. Unfortunately, in the case of read, steps 1, 2, 4, 5, 7, and 10 consist mostly of interpretation and traversal, and steps 3, 8, and most of 9 are locking. Only step 6 and a small part of 9 can be categorized as work.
3.2 Invariants and Quasi-Invariants for Specialization
Our specialized version of read, called is_read, is specialized according to the invariants and quasi-invariants listed in table 1. Only fs_constant is a true invariant; the remainder are quasi-invariants.
The fs_constant invariant states that file system constants such as the file type, file system type, and block size do not change once the file has been opened. This invariant is known to hold because of Unix file system semantics. Based on this invariant, is_read can avoid the traversal costs involved in step 2 above. Our is_read implementation is specialized, at open time, for regular files residing on a local file system with a block size of 8 KB. It is important to realize that the is_read code is enabled, at open time, for the specific file being opened. Reading any other kind of file requires the use of the standard HP-UX read.
It is also important to note that the is_read path is specialized for the specific process performing the open. That is, we assume that the only process executing the is_read code will be the one that performed the open that generated it.
Table 1: Invariants
<table>
<thead>
<tr>
<th>(Quasi-)Invariant</th>
<th>Description</th>
<th>Savings</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>FS_CONSTANT</strong></td>
<td>Invariant filesystem parameters.</td>
<td>Avoids step 2.</td>
</tr>
<tr>
<td><strong>NO_FP_SHARE</strong></td>
<td>No file pointer sharing.</td>
<td>Avoids most of step 9 and allows caching of file offset in file descriptor.</td>
</tr>
<tr>
<td><strong>NO_HOLES</strong></td>
<td>No holes in file.</td>
<td>Avoids checking for empty block pointers in inode structure.</td>
</tr>
<tr>
<td><strong>NO_inode_SHARE</strong></td>
<td>No inode sharing.</td>
<td>Avoids steps 3 and 8.</td>
</tr>
<tr>
<td><strong>NO_USER_LOCKS</strong></td>
<td>No user-level locks.</td>
<td>Avoids having to check for user-level locks.</td>
</tr>
<tr>
<td><em>READ_ONLY</em></td>
<td>No writers.</td>
<td>Allows optimized end of file check.</td>
</tr>
<tr>
<td><em>SEQUENTIAL_ACCESS</em></td>
<td>Calls to <strong>is_read</strong> inherit file offset from previous <strong>is_read</strong> calls</td>
<td>For small reads, avoids steps 2, 3, 4, 5, 7, 8, 9.</td>
</tr>
</tbody>
</table>
The major advantage of this approach is that a private per-process, per-file read call has well-defined access semantics: reads are sequential by default.
Specializations based on the quasi-invariant _SEQUENTIAL_ACCESS_ can have huge performance gains. Consider a sequence of small (say 1 byte) reads by the same process to the same file. The first _read_ performs the interpretation, traversal and locking necessary to locate the kernel virtual address of the data it needs to copy. At this stage it can specialize the next _read_ to simply continue copying from the next virtual address, avoiding the need for any of the steps 2, 3, 4, 5, 7, 8, and 9. This specialization is predicated not only on the _SEQUENTIAL_ACCESS_ and _NO_FP_SHARE_ quasi-invariants, but also on other quasi-invariants such as the assumption that the next _read_ won’t cross a buffer boundary, and that the buffer cache replacement code won’t have changed the data that resides at that virtual memory address. The next section shows how these assumptions can be guarded.
The _NO_HOLES_ quasi-invariant is also related to the specializations described above. Contiguous sequential reading can be specialized down to contiguous byte-copying only for files that don’t contain holes, since hole traversal requires the interpretation of empty block pointers in the inode.
The _NO_inode_SHARE_ and _NO_FP_SHARE_ quasi-invariants allow exclusive access to the file to be assumed. This assumption allows the specialized _read_ code to avoid locking the inode and file table in steps 3, 8, and 9. They also allow the caching (in data structures associated with the specialized code) of information such as the file pointer. This caching is what allows all of the interpretation, traversal and locking in steps 2, 3, 4, 5, 8 and 9 to be avoided.
In our current implementation, all invariants are validated in open, when specialization happens. A specialized read routine is not generated unless all of the invariants hold.
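The open-time validation amounts to a single predicate over the state observed at open: specialization proceeds only if every (quasi-)invariant in table 1 holds. A hypothetical sketch, with invented field names standing in for the real checks:

```c
#include <assert.h>

/* Sketch of the open-time check. Each field is a stand-in for one of
 * the (quasi-)invariants in Table 1; the real kernel derives these
 * from the vnode, inode, and file table. */

struct open_state {
    int regular_local_8k;   /* FS_CONSTANT: regular file, local FS, 8 KB */
    int fp_shared;          /* violates NO_FP_SHARE    */
    int has_holes;          /* violates NO_HOLES       */
    int inode_shared;       /* violates NO_INODE_SHARE */
    int user_locks;         /* violates NO_USER_LOCKS  */
    int writers;            /* violates READ_ONLY      */
};

/* Specialized read is generated only if *all* invariants hold. */
static int can_specialize(const struct open_state *s) {
    return s->regular_local_8k && !s->fp_shared && !s->has_holes &&
           !s->inode_shared && !s->user_locks && !s->writers;
}

static struct open_state ok  = { 1, 0, 0, 0, 0, 0 };
static struct open_state bad = { 1, 0, 0, 1, 0, 0 };  /* inode shared */
```

A single failed predicate falls back to the standard read path, matching the all-or-nothing policy stated above.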
3.3 Guards
Since specializations based on quasi-invariants are optimistic, they must be adequately guarded. Guards detect the impending invalidation of a quasi-invariant and invoke the replugging routine (section 3.4) to unspecialize the _read_ code. Table 2 lists the quasi-invariants used in our implementation and the HP-UX system calls that contain the associated guards.
Quasi-invariants such as read-only and no holes can be guarded in open since they can only be violated if the same file is opened for writing. The other quasi-invariants can be invalidated during other system calls which either access the file using a file descriptor from within the same or a child process, or access it from other processes using system calls that name the file using a pathname. For example, no fp share will be invalidated if multiple file descriptors are allowed to share the same file pointer. This situation can arise if the file descriptor is duplicated locally using dup, if the entire file descriptor table is duplicated using fork, or if a file descriptor is passed through a Unix domain socket using sendmsg. Similarly, sequential access will be violated if the process calls lseek or readv.
The guards in system calls that use file descriptors are relatively simple. The file descriptor parameter is used as an index into a per-process table; if a specialized file descriptor is already present then the quasi-invariant will become invalid, triggering the guard and invoking the replugger. For example, the guard in dup only responds when attempting to duplicate a file descriptor used by is_read. Similarly, fork checks all open file descriptors and triggers replugging of any specialized read code.
Guards in calls that take pathnames must detect collisions with specialized code by examining the file's inode. We use a special flag in the inode to detect whether a specialized code path is associated with a particular inode.
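A descriptor-based guard of the kind just described might look like the following userland sketch (names invented; the real guard lives inside the kernel's dup implementation): the per-process table is indexed by the file descriptor, and replugging is invoked only when a specialized entry is found.

```c
#include <assert.h>

/* Sketch of a cheap file-descriptor guard: dup() indexes a per-process
 * table; only a specialized descriptor triggers the replugger. */

enum { NFDS = 8 };
static int specialized_fd[NFDS] = { [3] = 1 };  /* fd 3 is specialized */
static int replug_calls;

static void replug(int fd) {
    specialized_fd[fd] = 0;   /* fall back to the generic read path */
    ++replug_calls;
}

static int guarded_dup(int fd) {
    if (specialized_fd[fd])   /* NO_FP_SHARE is about to be violated */
        replug(fd);
    return fd;  /* toy: a real dup would allocate a new descriptor */
}
```

Duplicating an unspecialized descriptor costs only the table lookup; the replugger runs only on the rare invalidating call.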
Two quasi-invariants discussed in section 3.2, but not listed in table 2 are the assumption that cached buffers are not replaced between calls to is_read and the assumption that successive calls to is_read hit the same buffer. The first of these quasi-invariants can be guarded by altering the buffer cache replacement strategy slightly. The second is “guarded” explicitly using interpretation code in the fast-path code.
With the exception of lseek, triggering any of the guards discussed above causes the read code to be replugged back to the general purpose implementation. lseek is the only instance of respecialization in our implementation; when triggered, it simply updates the file offset in the specialized read code.
To guarantee that all invariants and quasi-invariants hold, open checks that the vnode meets all the fs constant and no holes invariants and that the requested access is only for read. Then the inode is checked for sharing. If all invariants hold during open then the inode and file descriptor are marked as specialized and an is_read path is set up for use by the calling process on that file. Setting up the is_read path amounts to allocating a private per-file-descriptor data structure for use by the is_read code which is sharable. The inode and file descriptor markings activate all of the guards atomically since the guard code is permanently present.
Table 2: Guards
<table>
<thead>
<tr>
<th>Quasi-Invariant</th>
<th>HP-UX system calls that may invalidate invariant</th>
</tr>
</thead>
<tbody>
<tr>
<td>no fp share</td>
<td>creat, dup, dup2, fork, sendmsg</td>
</tr>
<tr>
<td>no holes</td>
<td>open</td>
</tr>
<tr>
<td>no inode share</td>
<td>creat, fork, open, truncate</td>
</tr>
<tr>
<td>no user locks</td>
<td>lockf, fcntl</td>
</tr>
<tr>
<td>read only</td>
<td>open</td>
</tr>
<tr>
<td>sequential access</td>
<td>lseek, readv</td>
</tr>
</tbody>
</table>
3.4 The Replugging Algorithm
Replugging components of an actively running kernel is a non-trivial problem that requires a paper of its own. The problem is simplified here for two reasons. First, our main objective is to test the feasibility and benefits of specialization. Second, specialization has been applied to the replugging algorithm itself. For kernel calls, the replugging algorithm should be specialized, simple, and efficient.
The first problem to be handled during replugging is synchronization. If a replugger were executing in a single-threaded kernel with no system call blocking in the kernel, then no synchronization would be needed. Our environment is a multiprocessor, where kernel calls may be suspended. Therefore, the replugging algorithm must handle two sources of concurrency: (1) interactions between the replugger and the process whose code is being replugged and (2) interactions among other kernel threads that triggered a guard and invoked the replugging algorithm at the same time. To simplify the replugging algorithm, we make two assumptions that are true in many Unix systems: (A1) kernel calls cannot abort, so we do not have to check for an incomplete kernel call to is_read, and (A2) there is only one thread per process, so multiple kernel calls cannot concurrently access process level data structures.
The second problem that a replugging algorithm must solve is the handling of executing threads inside the code being replugged. We assume (A3) that there can be at most one thread executing inside specialized code. This is the most important case, since in all cases so far we have specialized for a single thread of control. This assumption is particularly easy to justify in Unix environments. To separate the simple case (when no thread is executing inside code to be replugged) from the complicated case (when one thread is inside), we use an "inside-flag". The first instruction of the specialized read code sets the inside-flag to indicate that a thread is inside. The last instruction in the specialized read code clears the inside-flag.
To simplify the synchronization of threads during replugging, the replugging algorithm uses a queue, called the holding tank, to stop the thread that happens to invoke the specialized kernel call while replugging is taking place. Upon completion of replugging, the algorithm activates the thread waiting in the holding tank. The thread then resumes the invocation through the unspecialized code.
For simplicity, we describe the replugging algorithm as if there were only two cases: specialized and non-specialized. The paths take the following steps:

Fast Path:
1. Check the file descriptor to see if this file is specialized. If not, branch out of the fast path.
2. Set inside-flag.
3. Branch indirect. This branch leads to either the holding tank or the read path. It is changed by the replugger.
Read Path:
1. Do the read work.
2. Clear inside-flag.
Holding Tank:
1. Clear inside-flag (so the replugger can proceed).
2. Sleep on the per-file lock to await replugger completion.
3. Jump to standard read path.
Replugging Algorithm:
1. Acquire per-process lock to block concurrent repluggers. It may be that some guard was triggered concurrently for the same file descriptor, in which case we are done.
2. Acquire per-file lock to block exit from holding tank.
3. Change the per-file indirect pointer to send readers to the holding tank (changes action of the reading thread at step 3 so no new threads can enter the specialized code).
4. Spinwait for the per-file inside-flag to be cleared. Now no threads are executing the specialized code.
5. Perform incremental specialization according to which invariant was invalidated.
6. Set file descriptor appropriately, including indicating that the file is no longer specialized.
7. Release per-file lock to unblock thread in holding tank.
8. Release per-process lock to allow other repluggers to continue.
The replugger synchronizes with the reader thread through the inside-flag in combination with the indirection pointer. If the reader sets the inside-flag before a replugger sets the indirection pointer, then the replugger waits for the reader to finish. If the reader takes the indirect call into the holding tank, it clears the inside-flag, which tells the replugger that no thread is executing the specialized code. Once the replugging is complete, the algorithm unblocks any thread in the holding tank, which then resumes through the new unspecialized code.
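The reader/replugger handshake can be sketched single-threaded, with all locking and the real holding-tank sleep elided (illustrative only; names invented):

```c
#include <assert.h>

/* Sketch of the replugging handshake. The reader sets inside-flag,
 * takes the indirect branch, and clears the flag on exit. The
 * replugger first redirects the branch (no new entries), then waits
 * for the flag to clear before swapping implementations. */

static volatile int inside_flag;

static int specialized_body(void)   { return 100; }
static int unspecialized_read(void) { return 200; }

/* Holding tank: clear the flag so the replugger can proceed, then
 * (after the elided sleep) take the standard read path. */
static int holding_tank(void) {
    inside_flag = 0;
    return unspecialized_read();
}

static int (*branch)(void) = specialized_body;

static int fast_read(void) {
    inside_flag = 1;          /* step 2 of the fast path        */
    int r = branch();         /* step 3: indirect branch        */
    inside_flag = 0;          /* last instruction of read path  */
    return r;
}

static void replugger(void) {
    branch = holding_tank;    /* no new threads enter the old code */
    while (inside_flag) { }   /* spinwait; trivial single-threaded */
    /* ...incremental specialization would happen here... */
}
```

In the real kernel the per-process and per-file locks serialize concurrent repluggers and block the holding-tank exit; here only the flag-and-pointer core is shown.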
In most cases of unspecialization, the general case read is used instead of the specialized is_read. In this case, the file descriptor is marked as unspecialized and the memory is_read occupies is marked for garbage collection at file close time.
3.5 Cost/Benefit Analysis
Specialization reduces the execution costs of the fast path, but it also requires additional mechanisms, such as guards and replugging algorithms, to maintain system correctness. By design, guards are located in low-frequency execution paths, and replugging is performed only in the rare case of quasi-invariant invalidation. We have also added code to open to check if specialization is possible, and to close to garbage collect the specialized code after replugging. An informal analysis of these costs, compared with the gains, is as follows:
\[
\text{Overhead} = \sum_i f_{\text{syscall}}^{i} \cdot \text{Guard}^{i} + \text{Open} + \text{Close} + f_{\text{Replug}} \cdot \text{Replug} \tag{1}
\]
\[
\text{Overhead} + f_{\text{is}} \cdot \text{is\_read} < (f_{\text{TotalRead}} - f_{\text{is}}) \cdot \text{read}_{\text{HP-UX}} \tag{2}
\]
In equation 1, Overhead includes the cost of guards, the replugging algorithm, and the increase due to initial invariant validation, specialization and garbage collecting for all file opens and closes. Each Guard^i (in different kernel calls) is invoked f_{syscall}^i times. Similarly, Replug is invoked f_{Replug} times. A small part of the cost of synchronization with the replugger is borne by is_read (the setting and resetting of inside-flag), but overall is_read is much faster than read (Section 4).
In equation 2, f_{is} is the number of times the specialized is_read is invoked and f_{TotalRead} is the total number of invocations to read the file. Specialization wins if inequality 2 holds.
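As a back-of-envelope illustration of this cost/benefit trade-off, the helper below compares total cycles spent on reads with and without specialization, using the 1-byte cycle counts measured in Section 4 (1513 vs. 2991 cycles). The frequencies and the lumped overhead figure are hypothetical.

```c
#include <assert.h>

/* Illustrative break-even check. Cycle counts for is_read (1513) and
 * HP-UX read (2991) come from Table 3; the overhead argument lumps
 * together open/close checks, guards, and any replugging. */

static long total_with(long n_is, long overhead, long is_read_c,
                       long n_other, long read_c) {
    return overhead + n_is * is_read_c + n_other * read_c;
}

static long total_without(long n_total, long read_c) {
    return n_total * read_c;
}

/* Specialization wins when the specialized run costs fewer cycles
 * overall than running every read through the generic path. */
static int specialization_wins(long n_is, long n_other, long overhead) {
    return total_with(n_is, overhead, 1513, n_other, 2991)
         < total_without(n_is + n_other, 2991);
}
```

With an assumed lumped overhead of 6500 cycles, a run of 1000 one-byte specialized reads amortizes the overhead easily, while a single read does not.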
Table 3: HP-UX read versus is_read using Benchmark 1 (in CPU cycles)
<table>
<thead>
<tr>
<th>Experiment</th>
<th>1 byte read</th>
<th>8192 1-byte reads</th>
<th>8 KB read</th>
<th>64 KB read</th>
</tr>
</thead>
<tbody>
<tr>
<td>read</td>
<td>2991</td>
<td>25,568,196</td>
<td>6081</td>
<td>38,988</td>
</tr>
<tr>
<td>is_read</td>
<td>1513</td>
<td>12,380,884</td>
<td>4833</td>
<td>33,890</td>
</tr>
<tr>
<td>read:is_read ratio</td>
<td>2:1</td>
<td>2.1:1</td>
<td>1.3:1</td>
<td>1.2:1</td>
</tr>
<tr>
<td>is_read (normalized)</td>
<td>0.51</td>
<td>0.48</td>
<td>0.79</td>
<td>0.87</td>
</tr>
</tbody>
</table>
4 Performance Results
The following sections outline a series of microbenchmarks to measure the performance of the incrementally and optimistically specialized read fast path, as well as the overhead associated with guards and replugging. All of the experiments were run with a warm buffer cache in order to prevent device I/O costs from dominating the results. The use of specialization to optimize the device I/O path and make better use of the buffer cache is the subject of a separate study currently underway in our group.
The experimental environment for the benchmarks was a Hewlett-Packard 9000 series 800 G70 (9000/887) [1] dual-processor server running in single-user mode. This server is configured with 128 MB of RAM. The two PA7100 [9] processors run at 96 MHz and each contains one MB of instruction cache and one MB of data cache.
4.1 Performance of the read Fast Path
The first microbenchmark is designed to measure best case read performance. The program consists of a tight loop that opens the file, gets a timestamp, reads N bytes, gets a timestamp, and closes the file. Timestamps are obtained by reading the PA-RISC’s interval timer, a processor control register that is incremented every processor cycle [12].
The benchmark result is best case in the sense that it makes optimal use of the processor’s data cache during copyout() by choosing the target of the read to be a page-aligned user buffer whose addresses do not conflict in the processor’s data cache with those of the file system’s buffer block.
Table 3 compares the performance of HP-UX read with is_read for reads of one byte, a sequence of 8192 one-byte reads, 8 KB, and 64 KB. In all cases, is_read performance is better than HP-UX read. For single-byte reads, is_read takes only half the time of HP-UX read. For larger reads, the performance gain is not as large because the overall time is dominated by data copying costs. However, even reads of 64 KB improve by about 13%.
The results presented in table 3 are from operations performed in a controlled environment with minimal memory effects. However, they are not indicative of normal operations where reads from files are sequential, using multiple file system buffers. To address this restriction, the second microbenchmark reads a 5 MB file sequentially using fixed sized reads to the same page-aligned user buffer. Before running the benchmark, the file is searched to load it into the buffer cache. The benchmark ensures that the file is not present in the processor’s data cache by zero-filling a 1 MB user buffer before opening the file. Figure 1 illustrates the results of this benchmark for HP-UX read and is_read using 8 KB and 64 KB reads.
There are two things to notice in figure 1. First, using median values, the performance improvement of is_read over HP-UX read has dropped by about 13% for the 8 KB case, and 7% for the 64 KB case. The reduction in improvement compared to the first benchmark is due to the uniformly increased cost of the copyout operation which is caused by less favorable cache conditions.
Second, there is a step at one MB (the 128th 8 KB read and the 16th 64 KB read), which results from our zero-fill approach to removing the file contents from the data cache. Zero-filling the buffer also fills the data cache with “dirty” data, which requires memory writeback. After the first one MB no more writeback is required.
4.2 The Cost of the Initial Specialization
The performance improvements in the read fast path come at the expense of overhead in other parts of the system. The most significant impact occurs in the open system call, which is the point at which the specialized read path is generated. open has to check 8 predicates for a total of about 90 instructions and a lock/unlock pair. If specialization can occur it needs to allocate some kernel memory and fill it in. close needs to check if the file descriptor is or was specialized and if so, free the kernel memory. A kernel memory alloc takes 119 cycles and free takes 138 cycles.
The impact of this work is that the new specialized open call takes 5582 cycles compared to 5227 cycles for the standard HP-UX open system call. In both cases, no inode traversal is involved. As expected, the cost of the new open call is higher than the original. However, notice that the increase in cost is small enough that a program that opens a file and reads it once can still benefit from specialization.
4.3 The Cost of Nontriggered Guards
The cost of guards can be broken down into two cases: the cost of executing them when they are not triggered, and the cost of triggering them and performing the necessary replugging. This sub-section is concerned with the first case.
Guards are associated with each of the system calls shown in table 2. As noted elsewhere, there are two sorts of guards: one checks for specialized file descriptors and is very cheap; the other checks for specialized inodes. Since inodes can be shared, they must be locked before they can be checked; this lock expense is only incurred if the file passes all the other tests first. A lock/unlock pair takes 145 cycles. A guard requires 2 temporary registers, 2 loads, an add, and a compare (11 cycles), plus a function call if it is triggered. It is important to note that these guards do not occur in the data transfer system calls, except for `readv`, which is not frequently used.
In the current implementation, guards are fixed in place (and always perform checks) but they are triggered only when specialized code exists. Alternatively, guards could be inserted in-place when associated specialized code is generated. Learning which alternative performs better requires further research on the costs and benefits of specialization mechanisms.
4.4 The Cost of Replugging
There are two costs associated with replugging. One is the overhead added to the fast path in `is_read`: checking whether the file descriptor is specialized (and calling `read` if not), writing the inside-flag bit twice, and making the indirect function call with zero arguments. A timed microbenchmark shows this cost to be 35 cycles.
The second cost of replugging is incurred when the replugging algorithm is invoked. This cost depends on whether there is a thread already present in the code path to be replugged. If so, the elapsed time taken to replug can be dominated by the time taken by the thread to exit the specialized path. The worst case for the `read` call occurs when the thread present in the specialized path is blocked on I/O. We are working on a solution to this problem which would allow threads to “leave” the specialized code path when initiating I/O and rejoin a replugged path when I/O completes, but this solution is not yet implemented.
In the case where no thread is present in the code path to be replugged, the cost of replugging is determined by the cost of acquiring two locks, one spinlock, checking one memory location and storing to another (to get exclusive access to the specialized code). To fall back to the generic `read` takes 4 stores plus address generation, plus storing the specialized file offset into the system file table which requires obtaining the File Table Lock and releasing it. After incremental specialization two locks have to be released. An inspection of the generated code shows the cost to be about 535 cycles assuming no lock contention. The cost of the holding tank is not measured since that is the rarest subcase and it would be dominated by spinning for a lock in any event.
Adding up the individual component costs, and multiplying them by their frequencies, we can estimate the guarding and replugging overhead attributed to each `is_read`. Assuming that 100 `is_read` calls happen for each guarded kernel call (`fork`, `creat`, `truncate`, `open`, `close`) and each replugging, less than 10 cycles of guarding overhead are added to each invocation of `is_read`.
5 Discussion
5.1 The Dream vs. The Reality
The experimental results described in Section 4 show the performance of our current `is_read` implementation. At the time of writing this implementation was not fully specialized: some invariants were not used and, as a result, the measured `is_read` path contains more interpretation and traversal code than is absolutely necessary. Therefore, the performance results presented above are conservative. Even so, the results show that optimistic specialization can improve the performance of both small and large reads.
At one end of the spectrum, assuming a warm buffer cache, the performance of small reads is dominated by control flow costs. Through specialization we are able to remove, from the fast path, a large amount of code, concerned with interpretation, data structure traversal and synchronization. Hence, it is not surprising that the cost of small reads is reduced significantly.
At the other end of spectrum, again assuming a warm buffer cache, the performance of large reads is dominated by data movement costs. Our experimental results show that byte copying costs are in turn dominated by cache effects. Specialization can reduce the number of cache conflicts between the source and destination of byte copies that occur in sequential reads to the same user buffer. We are working on specializing the file system’s buffer allocation code to choose buffer cache blocks that avoid conflicts with previous read buffers.
In both cases, the use of specialization removes overhead from the fast path by adding overhead to other parts of the system: specifically, the places at which the specialization, replugging and guarding of optimistic specializations occur. Our experience has shown that generating specialized implementations is easy. The real difficulty arises in correctly placing guards and making policy decisions about what and when to specialize and replug. In existing kernels, guards are difficult to place correctly because it is non-trivial to identify all of the places that the optimistic specialization depends on. This problem is due, in part, to the lack of encapsulation in programming languages such as C. We are currently working on a restricted version of the C programming language, and a set of associated tools, to help solve these problems. Ultimately, automatic guard placement requires new programming language and compiler technology.
Similarly, the choice of what to specialize, when to specialize, and whether to specialize optimistically are all non-trivial policy decisions. In our current implementation we made these decisions in an ad hoc manner, based on our expert knowledge of the system implementation, semantics and common usage patterns. A more systematic approach would require, at the very least, some accurate profiling information to determine when the savings due to a potential specialization will exceed its associated guarding and replugging costs.
5.2 Interface Design and Kernel Structure
From early in the project, our intuition told us that, in the most specialized case, it should be possible to reduce the cost of a read system call that hits in the buffer cache to little more than the basic cost of data movement from the kernel to the application’s address space, i.e., the cost of copying the bytes from the buffer cache to the user’s buffer. In practice, however, our specialized read implementation costs considerably more than copying one byte: 1513 cycles, compared to approximately 235 cycles for entering the kernel, fielding the minimum number of parameters, and carefully copying a single byte out to the application’s address space.
Upon closer examination, we discovered that the remaining 1278 cycles were due in part to constraints that were placed upon our design by an over-specification of the UNIX read implementation. For example, the need to always support statistics-gathering facilities such as ptrace and times requires every read call to record the time it spends in the kernel. Another example is the constraint that data has to be delivered to a buffer in the application’s address space rather than a register. This forces the read call to incur significant costs associated with a careful copyout that ensures that page-faults and security violations do not occur while executing the copy in kernel mode. For reads of only a single byte, a more sensible implementation would return the data in a register in much the same way as the stdio library `getc` call.
To push the limits of a kernel-based read implementation, we implemented a special one-byte read system call, called `readc`, which returns a single byte in a register, just like the stdio library `getc` call. In addition to the optimizations used in our specialized isread call, readc avoids switching stacks, omits ptrace support, and skips updating profile information. The performance of the resulting readc implementation is 64 cycles. Notice that aggressive use of specialization can lead to a readc system call that performs within a factor of two of a pure user-level getc, which costs 38 cycles in HP-UX's stdio library. This result is encouraging because it shows the feasibility of implementing operating system functionality at kernel level with performance similar to that of user-level libraries. Aggressive specialization may render unnecessary the popular trend of duplicating operating system functionality at user level [2, 11] for performance reasons.
\footnote{On processors with virtually indexed caches, such as the PA-RISC, conflicts depend on the choice of virtual addresses for the source and target of the copy. On processors with physically indexed caches, conflicts depend on the choice of physical addresses for the source and target of the copy.}
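The interface difference between a buffer-based read and a getc-style readc can be illustrated at user level. This Python sketch is only an analogy for the two calling conventions (the real readc is an experimental kernel call; the function names and the `ERR_EOF` sentinel are illustrative):

```python
import io

ERR_EOF = -1  # illustrative sentinel for end-of-file in the register-return style

def read_one(stream, buf):
    """UNIX-read style: deliver the byte through a caller-supplied buffer.
    The kernel analogue must do a careful copyout into this buffer."""
    b = stream.read(1)
    if not b:
        return 0          # 0 bytes read: end of file
    buf[0] = b[0]         # the copy into the caller's buffer
    return 1

def readc(stream):
    """getc/readc style: return the byte in the return value itself,
    avoiding the caller-supplied buffer and the copyout entirely."""
    b = stream.read(1)
    return b[0] if b else ERR_EOF
```

The second convention is what lets the specialized kernel call skip the page-fault and security checks associated with writing into user memory.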
Another commonly cited reason for moving operating system functionality to user level is to give applications more control over policy decisions and operating system implementations. We believe that these benefits can also be gained without duplicating operating system functionality at user level. Following an open-implementation (OI) philosophy [13], operating system functionality can remain in the kernel, with customization of the implementation supported in a controlled manner via meta interface calls [1-4]. A strong lesson from our work and from other work in the OI community [13] is that abstractly specified interfaces, i.e., those that do not constrain implementation choices unnecessarily, are key to gaining the most benefit from techniques such as specialization.
6 Related Work
There are several other projects and approaches that are “adaptive” in some sense. In the operating systems area, the SPIN kernel [4] at the University of Washington is a good example. SPIN allows applications to dynamically load executable modules, called spindles, into the kernel. These spindles are written in a type-safe programming language to ensure that they do not adversely affect kernel operations. SPIN allows applications to extend the OS kernel interface in a custom fashion through co-existence, while incremental specialization extends kernel interfaces only through meta interfaces, keeping the applications at user level.
The Flex project [5] at the University of Utah is building the Mach 4 microkernel using specialization techniques. Synthetix and Flex are complementary in their goals: Flex aims to implement a production-quality Mach microkernel, while Synthetix is developing tools and a methodology that apply to a wide range of environments, with HP-UX and Mach as its primary demonstration systems.
A third significant OS project aiming at adaptiveness is the Substrate Object Model [3] at the University of Notre Dame, which proposes customizable objects as a way to implement extensible and flexible kernel services. Substrates are currently being used to extend the AIX operating system, through a combination of substrates, efficient cross-domain RPC based on shared virtual memory, and an extended AIX dynamic loader that loads subclasses into the kernel.
The Apertos operating system [20] supports objects and meta-objects explicitly. Apertos supports dynamic reconfiguration by moving an object into a new meta-space. An object's behavior can be modified by its meta objects, including kernel objects. Up to now, Apertos has not used specialization to improve its performance.
Other examples of related systems include: the Chorus/MiX commercial operating system [18], which has specialized execution paths; the Kernel ToolKit project at Georgia Tech [10], which supports online and offline object reconfiguration; and, of course, Synthesis [17, 15], which was discussed in the Introduction.
7 Conclusions
This paper has introduced the concepts of incremental and optimistic specialization. These concepts refine previous work on kernel optimization using dynamic code generation in Synthesis [17, 15], and can be applied to commercial operating system kernels.
We have demonstrated incremental and optimistic specialization in an experiment on the Unix File System of HP-UX, a commercial operating system. The experimental results show that significant performance improvements can be gained for three representative cases: 50% for 1-byte reads, 20% for 8K-byte reads, and 13% for 64K-byte reads.
An important finding in our experiments is the significant cost of guaranteeing the correctness of specialized code. Defining the invariants and quasi-invariants that allow specialization, and using them to specialize kernel code, turned out to be relatively easy. Creating and inserting the appropriate guards that detect the invalidation of quasi-invariants required a significant amount of effort.
Our experience shows the promise of incremental and optimistic specialization. However, before this approach can become pervasive, a more clearly defined programming methodology and support tools are needed. These are the topic of our current research. Fully automated incremental specialization is still a long way off and requires new programming language technology and partial evaluation tools.
8 Acknowledgements
Bill Trost of Oregon Graduate Institute and Takaichi Yoshida of Kyushu Institute of Technology performed the initial study of the HP-UX read and write system calls, identifying many quasi-invariants to use for specialization. Bill also implemented some prototype specialized kernel calls that showed promise for performance improvements. Luke Hornoff of University of Rennes contributed to discussions on specialization and tools development.
References
Framework Documentation
A Minimalist Approach
Universidade do Porto
Faculdade de Engenharia
FEUP
Department of Electrical and Computer Engineering
September 2003
Framework Fundamentals
An object-oriented framework is a cohesive design and implementation artifact. Frameworks serve to implement larger-scale components, and are implemented using smaller-scale classes [Riehle, 2000].
Since its creation at the end of the 1980s, the concept of object-oriented frameworks has attracted a lot of attention from researchers and software engineers, resulting in many frameworks being developed in industry and academia, covering different application domains. The benefits from frameworks include reduced time to market and improved compatibility and consistency [Taligent Press, 1994; Fayad and Schmidt, 1997a; Fayad et al., 1999].
This chapter presents the fundamental characteristics of the framework concept. It reviews common terminology, associated object-orientation concepts, the key benefits of reusing frameworks, and the role of frameworks in the context of object-oriented software architecture. The chapter concludes with a brief history of frameworks, from the early Simula frameworks of the 1960s until the present.
2.1 What Is a Framework?
A framework is a reusable design together with an implementation. It consists of a collection of cooperating classes, both abstract and concrete, which embody an abstract design for solutions to problems in an application domain [Johnson and Foote, 1988; Deutsch, 1989; Campbell et al., 1991; Cotter and Potel, 1995; Gamma et al., 1995; Lewis et al., 1995; Fayad and Schmidt, 1997b; Fayad et al., 1999].
But frameworks are more than just collections of classes. Frameworks are also architectural. A framework defines the overall application structure, its partitioning into classes and objects, the key responsibilities thereof, how the classes and objects collaborate, and the thread of control. So, frameworks dictate the architecture of the applications we build with them, but still leave enough design space to accommodate particular solutions. By predefining design parameters that are invariant in an application domain, frameworks help application developers get the key architectural aspects right from the beginning, letting them concentrate on the specifics of their applications.
When using a framework, we reuse not only analysis and design but also implementations. With a framework, developers can build applications by extending or customizing only some parts, while reusing framework implementations of the rest and retaining the original design.
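The inversion of control that distinguishes frameworks from ordinary libraries (the framework owns the thread of control; the application only fills in the variable parts) can be sketched as follows. This is an illustrative Python example with hypothetical names, not taken from any real framework:

```python
# A tiny "framework": the run() method fixes the overall control flow,
# while startup(), items() and process() are hooks that applications
# may override ("don't call us, we call you").

class Application:                       # framework class
    def run(self):
        out = []
        out.append(self.startup())          # hook
        for item in self.items():           # hook
            out.append(self.process(item))  # hook
        return out

    # Default hook implementations the application may override.
    def startup(self):
        return "started"

    def items(self):
        return []

    def process(self, item):
        return item

class WordCounter(Application):          # application-specific subclass
    def items(self):
        return ["one two", "three"]

    def process(self, item):
        return len(item.split())
```

Note that `WordCounter` reuses the framework's design (the shape of `run()`) and its implementation, while customizing only the two hooks it cares about.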
2.2 Frameworks and Reuse
The simple yet powerful vision of software reuse was introduced in 1968 [Naur and Randell, 1968] as a means to reduce the time and effort required to build and maintain high-quality software systems. In a broad sense, software reuse is the process of creating new software systems starting from existing software artifacts rather than building them from scratch.
Reuse does not happen by accident. We need to plan to reuse software, and to look for software to reuse. Reuse requires the right attitude, tools and techniques [Johnson and Foote, 1988]. Object-oriented frameworks are one such technique, and a powerful one that enables large-scale reuse.
2.2.1 Reuse Techniques
There are several techniques for software reuse, each possibly using different artifacts. Reusable software artifacts include source code fragments, design structures, abstract specifications, and documentation, to mention a few. Tools and techniques to support software reuse are usually categorized into compositional and generative approaches. While the compositional approach is based on reusing software artifacts, the generative approach is based on reusing software development processes, often embodied in tools that help automate them. Source code components and application generators are two examples of such approaches, respectively [Krueger, 1992].
A reuse technique must support some or all of the four important activities of software reuse, namely: abstracting artifacts; selecting artifacts, which includes classifying, finding and understanding them; specializing artifacts; and integrating artifacts [Biggerstaff and Richter, 1989].
Determining the best reuse technique is often difficult, as it depends heavily on the specifics of the project at hand. As an intuitive gauge for comparing the effectiveness of different reuse techniques, Krueger introduced the notion of cognitive distance [Krueger, 1992]. He informally defines it as the amount of intellectual effort that must be expended in developing a software system to go from the initial conceptualization to a specification expressed in the abstractions of the reuse technique, and from these to an executable system.
An ideal reuse technique should let us quickly find components that exactly fit our needs, are ready to use without being customized, and don’t force us to learn how to use them. The developer’s ability to reuse software is limited primarily by his ability to reason in terms of the abstractions used by the reuse technique. In other words, the cognitive distance between informal reasoning and the abstract concepts of the technique must be small. Natural, succinct and high-level abstractions describing artifacts in terms of “what” they do, rather than “how” they do it, are thus very important for effective software reuse. From all the existing reuse techniques, the reuse of software architectures is probably the technique that comes closest to this ideal.
2.2.2 How Object-Orientation Leverages Software Reuse
Historically, in the 1960s, reusable software components were procedural libraries. In 1967, with their language Simula 67 [Dahl et al., 1970], Dahl and Nygaard introduced most of the key concepts of object-oriented programming, namely objects, classes and inheritance, and with these concepts one of the main programming paradigms was born: object-oriented programming.
Simula concepts were important in the discussion of abstract data types and models for concurrent program execution, starting in the early 1970s. Alan Kay's group at Xerox PARC used Simula as a platform for their development of the first versions of Smalltalk in the 1970s, extending object-oriented programming in important ways.
Object-oriented programming is today becoming the dominant style for implementing complex applications involving a large number of interacting components. Among the multitude of object-oriented languages are Smalltalk, Object Pascal, C++, Common Lisp Object System (CLOS), Eiffel, BETA, and SELF. In particular, the Internet-related Java (developed by Sun) has rapidly become widely used since the late 1990s.
With the integration of data and operations into objects and classes, reusability has increased. The classes were packaged together into class libraries, often consisting of classes for different data structures, such as lists and queues. Class libraries were further structured using inheritance to facilitate the specialization of their classes. As a result, class libraries became capable of delivering software reuse beyond traditional procedural libraries.
Object-oriented programming languages combine features, such as data abstraction, polymorphism and inheritance, that encourage the reuse of existing code instead of writing new code from scratch. These features are detailed later in Section 2.4.1 (p. 29).
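As a small illustration of how data abstraction, inheritance and polymorphism let class-library clients reuse shared code while specializing behavior, consider this hypothetical container hierarchy (a Python sketch; the names are illustrative, echoing the lists and queues mentioned above):

```python
# Class-library style reuse: subclasses inherit the shared code of
# Container (remove, __len__) and specialize only the insertion policy.

class Container:
    def __init__(self):
        self._items = []

    def insert(self, x):          # policy hook, specialized by subclasses
        raise NotImplementedError

    def remove(self):             # shared code, reused as-is
        return self._items.pop(0)

    def __len__(self):            # shared code, reused as-is
        return len(self._items)

class Queue(Container):           # FIFO: insert at the back
    def insert(self, x):
        self._items.append(x)

class Stack(Container):           # LIFO: insert at the front
    def insert(self, x):
        self._items.insert(0, x)
```

Polymorphism means client code written against `Container` works unchanged with either subclass, which is exactly the kind of code-level reuse class libraries deliver.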
Taking advantage of these features, object-oriented languages promote the development of class libraries, which, like the procedural libraries, are mainly focused on the reuse of code. But code reuse has limitations, working best when the domain is narrow and well understood and the underlying technology is very static. In the long run, reusing the design of an application is probably more beneficial in economic terms than reusing the implementation of any of its components [Biggerstaff and Richter, 1987], because design is the main intellectual content of software and is far more difficult to create and re-create than code [Deutsch, 1989].
Although object-orientation started with object-oriented programming languages, it is more than object-oriented programming. Object-orientation also covers earlier phases of software development, such as analysis and design. Using a small set of concepts (objects, classes, and their relationships), developers can model an application domain (analysis), define an architecture to represent that model on a computer (design), and implement that architecture to let a computer execute the model (programming) [Booch, 1994].
As a whole, object-orientation introduced into software development more qualities that favor software reuse, namely problem-orientation, resilience to evolution, and domain analysis.
- **Problem-orientation.** The object-oriented models produced during analysis are all described in terms of the problem domain, which can be mapped directly to object-oriented concepts, such as classes, objects and relationships. This seamlessness from analysis to programming models makes them simpler to communicate between users and developers, and enables the delivery of better software products [Hoydalsvik and Sindre, 1993].
- **Resilience to evolution.** In an application domain, processes change more often than entities. As object-oriented models are structured around the entities, they are more stable in the face of change and therefore more resilient to evolution [Meyer, 1988].
- **Domain analysis.** Object-oriented analysis is naturally extensible to domain analysis, a broader and more extensive kind of analysis that tries to capture the requirements of the complete problem domain, including future requirements [Schafer et al., 1994].
Taking advantage of all these qualities, reusability appeared as one of the great promises of object-orientation, based on the reuse of code through inheritance. But these efforts provided reuse only at the level of small-scale components, usable as primitive building blocks of new applications. Neither object-orientation nor class libraries made possible the reuse of large-scale components. This understanding led to the conception of object-oriented frameworks, a kind of large and abstract application specially designed to be tailored for the development of concrete applications in a particular domain. At the beginning of the 2000s, object-oriented frameworks represent the state of the art in object-oriented reusable products.
### 2.2.3 The Power of Frameworks
Since its conception at the end of the 1980s, the appealing concept of the object-oriented framework has attracted a lot of attention from researchers and software engineers. During the 1990s, frameworks were built for a large variety of domains, such as user interfaces, operating systems, and distributed systems.
A framework can be shortly defined as a reusable design of an application together with an implementation [Johnson and Foote, 1988; Campbell et al., 1991; Lewis et al., 1995; Fayad and Schmidt, 1997b; Fayad et al., 1999]. The definitions for a framework are not consensual and vary from author to author. In few words, a framework can be defined as a semi-complete design and implementation for an application in a given problem domain.
As mentioned before, frameworks sit firmly in the middle of the spectrum of reuse techniques. They are more abstract and flexible (and harder to learn) than components, but more concrete and easier to reuse than a raw design (though less flexible and less broadly applicable). Frameworks are considered a powerful reuse technique because they lead to one of the most important kinds of reuse, the reuse of design. When compared to other techniques for reusing high-level design, such as templates [Spencer, 1988] or schemes [Katz et al., 1989], frameworks have the advantage of being expressed in a programming language, which makes them easier for programmers to learn and apply.
Frameworks and components are cooperating technologies. Software components are “binary units of independent production, acquisition, and development that interact to form a functioning system, with explicit interfaces and context dependencies only. A software component can be deployed independently and is subject to composition by third parties.” [Szyperski, 1998]. Frameworks provide a reusable context for components, in the form of component specifications and templates for their implementation, thereby making it easier to develop new components.
Frameworks and design patterns are concepts closely related as well, representing two different categories of high-level design abstractions [Johnson, 1992]. A single framework typically encompasses several design patterns. Patterns provide an intermediate level of abstraction between the application level and the level of classes and objects.
A design pattern is commonly defined as a generic solution to a recurring design problem that might arise in a given context [Alexander et al., 1977; Gamma et al., 1995; Buschmann et al., 1996]. The relationships between individual patterns unfold naturally in the application domain and form a high-level language, called a pattern language [Alexander et al., 1977]. A pattern language represents the essential design knowledge of a specific application domain, i.e., the experience gained by many designers in solving a class of similar problems. Design patterns and pattern languages are particularly good for documenting frameworks because they capture design experience and enclose meta-knowledge about how flexibility was incorporated. Pattern languages help document the application domain of the framework, the design of the framework in terms of classes, objects and their relationships, and also the specifications of important framework classes.
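As one concrete example of a design pattern that many frameworks embody (for instance, MVC-style frameworks), here is a minimal Observer sketch. The Python code and its names are illustrative, not drawn from any particular framework or from the references above:

```python
# Observer pattern in miniature: a Subject notifies registered observers
# of events without knowing their concrete classes; this decoupling is
# one of the recurring micro-architectures found inside frameworks.

class Subject:
    def __init__(self):
        self._observers = []

    def attach(self, obs):
        self._observers.append(obs)

    def notify(self, event):
        for obs in self._observers:
            obs.update(event)   # observers only need an update() method

class Logger:
    """One possible observer: records every event it is told about."""
    def __init__(self):
        self.events = []

    def update(self, event):
        self.events.append(event)
```

A framework typically fixes the `Subject` side and its notification protocol, leaving application developers to supply observers, which is one way a single pattern becomes part of a framework's reusable design.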
The combined use of frameworks with patterns and components is very effective, significantly helping to increase software quality and reduce development effort [Fayad et al., 1999].
The benefits of frameworks stem primarily from levels of code and design reuse much higher than those possible with other reuse technologies, such as code generators and class libraries. In addition to reusability, further benefits come from the inversion of control, modularity, and extensibility that frameworks provide to developers.
In general terms, the benefits of frameworks include higher development productivity, shorter time-to-market and higher quality. However, framework benefits are not necessarily immediate; they are gained over time. As significant productivity gains usually start to appear only after multiple uses of the technology, frameworks must be considered a medium-to-long-term investment.
The benefits of frameworks are felt in many phases of application development, from analysis to maintenance and evolution. During the analysis phase, frameworks help developers reduce the effort usually required to understand the overall application domain, and enable them to focus on the details of the application at hand.
It is during the design, coding, testing and debugging of applications that frameworks offer the most advantages over traditional application development. Most of the benefits result from the high levels of design and code reuse they provide, and also from the inversion of control (Section 2.4.3) possible with frameworks. Frameworks:
- provide guidance on application architecture;
- improve programming productivity and quality by reducing the amount of code to design and write;
- improve modularity and understandability by encapsulating volatile implementation details behind stable interfaces;
- promote the development of generic solutions reusable across an application domain;
- and improve application integrability and interoperability due to shared architecture and design.
The benefits of frameworks are no less important during the maintenance and evolution phases. The reusability and extensibility possible with frameworks help decrease maintenance effort by amortizing it over many application-specific parts. Framework-based applications are also easier to evolve without sacrificing compatibility and interoperability, because frameworks provide explicit hook methods that allow applications to modify or extend the framework's behavior.
In summary, through design and code reuse, frameworks help us reduce the amount of design we must create and the lines of code we must write, thereby significantly improving productivity. As a result, not only can we build applications faster, but we can also build applications that are easier to maintain and more consistent, because they share a similar structure at all levels of software design.
As software systems evolve in complexity, object-oriented application frameworks are being successfully applied in more application domains and therefore becoming more important for industry and academia. Application frameworks offer software developers an important vehicle for reuse and a means of capturing the essence of successful architectures, patterns, components and programming mechanisms.
Perhaps the best evidence of the power of object-oriented frameworks is reflected in the well-known success of many popular frameworks, such as: Model-View-Controller (MVC) [Goldberg, 1984], MacApp [Schmucker, 1986], ET++ [Weinand et al., 1989], Interviews [Linton et al., 1989], OpenDoc [Feiler and Meadow, 1996], Microsoft Foundation Classes (MFCs) [Prosise, 1999], IBM's SanFrancisco [Monday et al., 2000], several parts of Sun's Java Foundation Classes (RMI, AWT, Swing) [Drye and Wake, 1999], many implementations of the Object Management Group's (OMG) Common Object Request Broker Architecture (CORBA), and Apache's frameworks (Cocoon, Struts) [Apache, 1999]. Despite the existing difficulties of reusing frameworks, all of the above frameworks are playing, directly or indirectly, a very important role in contemporary software development.
### 2.3 Object-Oriented Software Architecture
Software design, and system design in general, takes place at different levels. Each level has components, both primitive and composite, rules of composition guiding the construction of non-primitive components, and rules of behavior providing semantics for the system. For software, at least three design levels are usually identified [Shaw and Garlan, 1996]:
- an **architecture level**, where the design issues involve the overall organization of a system as a composition of components, the definition of global control structures, and the assignment of functionality to design elements;
- a **code level**, where the design issues involve algorithms and data structures;
- and an **executable level**, where the design issues involve memory maps, call stacks, register allocations, and machine code operations.
As the size and complexity of software systems increase, the most important design problems are no longer the design of the algorithms and data structures, but instead the design and specification of the overall system structure.
### 2.3.1 What is Software Architecture?
Software architecture is an emergent field of study in software engineering specifically concerned with software design at the architecture level. Its importance to software engineering practitioners and researchers increased significantly during the 1990s, in response to the growing need for exploiting commonalities in system architectures, for making good choices among design alternatives, and for describing high-level properties of complex systems.
According to Webster’s Dictionary, architecture is “the art or practice of designing and building structures...”. The main concern of software architecture is the design and building of structure, and not the individual building blocks that bring the structures into existence.
Abstractly, software architecture describes the components from which systems are built, and the interactions among those components—the connectors. Software architecture may also describe the rules and mechanisms that guide the composition of components and eventual constraints on those rules.
Components at the architecture level can be things such as clients, servers, databases, filters, and layers of a hierarchical system. Examples of connectors range from simple procedure calls to complex protocols, such as client-server protocols, database-accessing protocols, event multicast, and pipes.
### 2.3.2 Architectural Levels
Object-oriented software architecture is particularly interested in the architecture of object-oriented systems, that is, in architectures having objects and classes as their primitive building blocks. Current practice suggests four levels of granularity to describe an object-oriented system: the class level, the pattern level (micro-architecture), the framework level (macro-architecture), and the component level.
**Class level**
At the smallest level of granularity, a system is designed as a set of classes, whose instances cooperate to achieve some sophisticated behavior otherwise impossible with a single object.
A class represents a well defined concept or entity of the domain. An object is an instance of a class, has a state, exhibits some well-defined behavior, and
has a unique identity. The structure and behavior of similar objects are defined in their common class. Whereas an object is a concrete software entity that exists in time and space, a class represents only an abstraction, the essence of an object [Booch, 1994].
For small systems, objects and classes are sufficient means for describing their architecture. However, as a system becomes bigger, more and more classes get involved in its architecture, and higher-level abstractions are needed to help developers cope with the complexity of designing and implementing such systems.

**Pattern level**
Immediately above the level of classes, we can use patterns to describe the micro-architectures of a system. A pattern names, abstracts, and identifies the key aspects of a design structure commonly used to solve a recurrent problem.
Succinctly, a pattern is a generic solution to a recurring problem in a given context [Alexander et al., 1977]. The description of a pattern explains the problem and its context, suggests a generic solution, and discusses the consequences of adopting that solution. The solution describes the objects and classes that participate in the design, their responsibilities and collaborations.
The concepts of pattern and pattern language were introduced into the software community through the influence of the work of Christopher Alexander, an architect who wrote extensively on patterns found in the architecture of houses, buildings, and communities [Alexander et al., 1977; Alexander, 1979; Lea, 1994].
Patterns help to abstract the design process and to reduce the complexity of software, because patterns specify abstractions at a higher level than single classes and objects. This higher level is usually referred to as the pattern level.
There are different kinds of patterns, of varying scale and level of abstraction, usually classified into architectural patterns, design patterns, and idioms [Buschmann et al., 1996].
- **Architectural patterns** express fundamental structural organization schemes for software systems.
- **Design patterns** are medium-scale tactical patterns that reveal structural and behavioral details of a set of entities and their relationships. They do not influence overall system structure, but instead define micro-architectures of subsystems and components.
- **Idioms** (sometimes also called coding patterns) are low-level patterns that describe how to implement particular aspects of components or relationships using the features of a specific programming language.
Patterns represent useful mental building blocks for dealing with specific design problems of software system development.
**Framework level**
Object-oriented systems of medium size typically involve a large number of classes, some patterns, and a few layers of cooperating frameworks. Frameworks are used to describe a system at a higher level than classes and patterns.
The concepts of frameworks and patterns are closely related, but neither subordinate to the other. Frameworks are usually composed of many design patterns, but are much more complex than a single design pattern. In relation to design patterns, a framework is sometimes defined as an implementation of a collection of design patterns.
A framework can also be seen as a representation of a specific domain in the form of a reusable design together with a set of implementations, often reusable and ready to instantiate.
A good framework has well-defined boundaries, along which it interacts with its clients, and an implementation that is usually hidden from the outside. Frameworks are a key part of medium- to large-scale development, but even frameworks have an upper limit in coping with high levels of complexity [Bäumer et al., 1997].
**Component level**
On the highest level of granularity, a system can be described as a set of large-scale components that work together to support a cohesive set of responsibilities. A component is defined in [Szyperski, 1998] as a “unit of composition with contractually specified interfaces and explicit context dependencies only; (...) (it) can be deployed independently and is subject to composition by third parties”. Examples of large-scale components are domain components, which are collections of related domain classes covering a well-defined application domain or a part of one. Large components may or may not have been built from one or more object frameworks [Wegner et al., 1992], but in the case of an object-oriented system they typically are.
## 2.4 Definition of Concepts
An object-oriented framework is a reusable software architecture comprising both design and code. Although this statement is generally accepted by most authors, there are a number of different definitions for object-oriented frameworks that emphasize other aspects of the framework concept.
The most referenced definition is perhaps the one found in [Johnson and Foote, 1988], which says that: “a framework is a set of classes that embodies an abstract design for solutions to a family of related problems”. This definition captures the essential aspects of the object-oriented framework concept, namely: (1) a framework comprises a set of classes; (2) a framework embodies a reusable design; and (3) a framework addresses a family of problems in a domain.
Other definitions present other aspects of frameworks, which altogether help us get a better understanding of the concept. For example, Deutsch states that “a framework binds certain choices about state partitioning and control flow; the (re)user (of the framework) completes or extends the framework to produce an actual application” [Deutsch, 1989]. The first part of this definition emphasizes (4) the structural aspect of a framework, by stating that architectural design decisions have been taken. The second part explicitly describes the main purpose of a framework, which is (5) to be adapted to the problem at hand, namely by extending or completing some of its parts.
In [Gamma et al., 1995] a framework is defined as “a set of cooperating classes that make up a reusable design for a specific class of software”, which builds on the two definitions mentioned above.
The definition given in [Cotter and Potel, 1995] concisely presents almost all the aspects previously presented (all but (4)): “A framework embodies a generic design, comprised of a set of cooperating classes, which can be adapted to a variety of specific problems within a given domain”.
In the following definition, in [Johnson, 1997], a framework is defined as “(...) the skeleton of an application that can be customized by an application developer”. This definition reinforces the structural aspect of a framework, and that future applications will conform to it by customizing parts of the framework. The activity of framework “adaptation” is referred to in this definition as framework “customization”, but the essential meaning of both terms is similar. In the same reference, a framework is also defined as “(...) a reusable design of all or part of a system that is represented by a set of abstract classes and the way their instances interact”. This definition indicates that a framework doesn't necessarily need to cover a complete problem domain, but possibly only smaller parts of it, thereby suggesting the possibility of composing several frameworks together to build concrete applications. The wording “set of abstract classes” may suggest that the extension of a framework has to be done through inheritance, but this is not completely true, as there are other ways of extending a framework, namely by composition.
Therefore, using a more complete and longer definition, we can define a framework as a software artifact:
- encompassing a set of cooperating classes, both abstract and concrete;
- expressed in a programming language, providing reuse of code and design;
- and specially designed to be customized, by inheritance or composition, for the construction of concrete solutions (systems, or applications) for a family of related problems within a specific problem domain.
In short, a framework emphasizes the more stable parts of an application domain, as well as their relationships and interactions, and provides customization mechanisms that let application developers solve their particular problems in that domain.
### 2.4.1 Object-Orientation Concepts
Much of the reuse power of object-oriented frameworks comes from the
most distinguishing characteristics of object-oriented programming languages: data abstraction, inheritance, and polymorphism.
**Data abstraction**
Class definitions in an object-oriented language are primarily a data abstraction mechanism that enables the unification of data with the procedures that manipulate them. Through abstraction and encapsulation, classes enable the separation of interfaces from implementations, and thus the change of implementation details without affecting clients. As a result, classes can often serve as fine-grained reusable components.
**Inheritance**
In object-oriented languages, classes can be organized along hierarchies supporting different kinds of inheritance. Class inheritance allows the properties and behavior of a class to be inherited and reused by its subclasses. Inheritance in programming languages can be seen as a built-in code sharing mechanism that, without polymorphism and dynamic binding, would not be much different from the module import mechanisms of traditional languages.
**Polymorphism**
This is a feature of object-oriented languages that enables a variable to hold objects belonging to different classes. When combined with overloading and dynamic binding, polymorphism becomes a powerful feature of object-oriented languages that makes it possible to mix and match components, to change collaborators at run time, and to build generic objects that can work with a wide range of components. Overloading makes it possible for several classes to offer and implement many operations with the same name, leaving it to the compiler or run-time environment to disambiguate references to a particular operation.
The combination of these features allows for greater flexibility in programming. Due to polymorphism, a single variable in a program can have many different types at run time. Inheritance provides a way of controlling the range of types a variable can have, by allowing only type mutations within an inheritance tree. Finally, dynamic binding delays until run time the determination of the specific operation implementation (method) to be called in response to an operation request, when the actual types of the variable and operation parameters are known [Meyer, 1988; Booch, 1994].
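These three features can be illustrated with a minimal sketch; the `Shape`, `Circle`, and `Square` names are illustrative assumptions, not taken from the text or any particular library:

```java
// A single variable of type Shape can hold objects of different classes
// within the inheritance tree; dynamic binding selects the method at run time.
abstract class Shape {
    abstract double area();
}

class Circle extends Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    @Override double area() { return Math.PI * r * r; }
}

class Square extends Shape {
    private final double side;
    Square(double side) { this.side = side; }
    @Override double area() { return side * side; }
}

class PolymorphismDemo {
    public static void main(String[] args) {
        Shape s = new Circle(1.0);      // s holds a Circle
        System.out.println(s.area());   // prints 3.141592653589793
        s = new Square(2.0);            // type mutation within the inheritance tree
        System.out.println(s.area());   // prints 4.0
    }
}
```

The declared type of `s` constrains it to descendants of `Shape`, while the implementation of `area` actually executed is chosen at run time from the object's concrete class.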
Two of the most distinguishing features of the framework concept rely heavily on the use of dynamic binding: the extensive use of template and hook methods, and the inversion of control flow.
### 2.4.2 Template and Hook Methods
Frameworks are designed and implemented to fully exploit the use of dynamically bound methods. To illustrate this, we will present a simple
example of a hypothetical single class framework for unit testing.
The framework consists of a single abstract class named `TestCase`. This class has three operations named `setUp`, `runTest` and `tearDown`. In order to implement tests for database connection operations, or mathematical operations, for example, the framework is supposed to be extended with concrete subclasses, such as `DBConnectionTest` or `MoneyTest`.
As different tests usually have different ways of being set up, executed, and terminated, the framework, i.e. the `TestCase` class, doesn't provide implementations for these operations; it is up to framework users to provide them. Although the details of concrete test implementations may differ, the overall running of a test is always the same, consisting of: the setup of the test, the running of the concrete test, and its finalization.
To capture this commonality between different test implementations, the framework implements a generic operation to run tests. This generic operation is named `run`, and its implementation is responsible for handling the invocation of the `setUp`, `runTest` and `tearDown` operations. In other words, to run any test case it is only necessary to invoke the `run` operation of `TestCase`, leaving it up to `run` to do the rest. An illustrative implementation of the `TestCase` class using the Java programming language is shown in Figure 2.2.
Concrete subclasses of `TestCase`, such as `MoneyTest`, should provide implementations for the `setUp`, `runTest` and `tearDown` operations. Due to the mechanism of dynamic binding, when the `run` operation is called on an instance of `MoneyTest`, it is the `run` operation of `TestCase` that will be used (if not overridden in `MoneyTest`). The `run` method of `TestCase` will then invoke the `setUp`, `runTest` and `tearDown` operations implemented in the `MoneyTest` class.
```java
abstract public class TestCase {
    public void run() {
        setUp();
        try {
            runTest();
        } finally {
            tearDown();
        }
    }
    protected abstract void setUp();
    protected abstract void runTest();
    protected abstract void tearDown();
}
```
Figure 2.2 The class `TestCase`.
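A hypothetical concrete subclass along the lines of the `MoneyTest` mentioned above could look as follows; the `amount` field and the check it performs are illustrative assumptions, and the `TestCase` class from Figure 2.2 is repeated to keep the sketch self-contained:

```java
abstract class TestCase {
    public void run() {                  // template method: fixed test sequence
        setUp();
        try {
            runTest();
        } finally {
            tearDown();
        }
    }
    protected abstract void setUp();     // hook methods, provided by subclasses
    protected abstract void runTest();
    protected abstract void tearDown();
}

class MoneyTest extends TestCase {
    private int amount;

    @Override protected void setUp() { amount = 100; }      // prepare the fixture
    @Override protected void runTest() {                    // the actual check
        if (amount + amount != 200)
            throw new AssertionError("addition of amounts failed");
    }
    @Override protected void tearDown() { amount = 0; }     // release the fixture

    public static void main(String[] args) {
        new MoneyTest().run();  // run() in TestCase drives the three hooks
    }
}
```

Calling `run()` on a `MoneyTest` instance executes the inherited template method, which in turn invokes the three hook methods implemented in the subclass.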
A Minimalist Approach to Framework Documentation
The point deserving attention here is that an operation in a superclass, the `run` operation of `TestCase`, is able to call operations in subclasses, and therefore the superclass has control over the execution flow of the overall test sequence. The `run` operation is often called a template method, and the `setUp`, `runTest` and `tearDown` operations are called hook methods.
Template and hook methods are two kinds of methods extensively used in the implementation of frameworks. These terms are commonly used by several authors [Wirfs-Brock et al., 1990; Pree, 1991; Gamma et al., 1993; Pree, 1995].
*Template methods* are implemented based on hook methods, and call at least one other method. A *hook method* is an elementary method in the context in which the particular hook is used, and can be either an abstract method, a regular method, or another template method. An *abstract method* is a method for which only the interface is provided, and which thus lacks an implementation. A *regular method* is a method that doesn't call hook or template methods, but provides a meaningful implementation on its own.
Generally, template methods are used to implement the frozen spots of a framework, and hook methods are used to implement the hot spots. The *frozen spots* are aspects that are invariant along several applications in a domain, possibly representing abstract behavior, generic flow of control, or common object relationships. The *hot spots* of a framework are aspects of a domain that vary among applications and thus must be kept flexible and customizable.
The difficulty of good framework design lies precisely in the identification of the appropriate hot spots that provide the level of flexibility required by framework users. More hot spots offer more flexibility, but result in a framework that is more difficult to design and use, so a balanced design lies somewhere in between.
In our simple testing framework example, the `run` template method implements the overall execution of a test case (a frozen spot), which consists of preceding the execution of the test case with a setup, and following it with a tear-down operation responsible for releasing any resources used during the test. All these operations are supposed to be provided by the hook methods `setUp`, `runTest` and `tearDown`, which are abstract methods. The class `TestCase` is considered an abstract class because it has at least one abstract method (actually it has three).
Template and hook methods can be organized in several ways. Although they can be unified in a single class, as in our example, in most of the situations it is better to put frozen spots and hot spots into separate classes. When using separate classes, the class that contains the hook method(s) is
considered the hook class of the class containing the corresponding template method(s)—the template class. We can consider that hook classes parameterize the corresponding template class. The hook methods on which a template method is based can also be organized in different ways. They can be defined all in the same class, or in separate classes, in a superclass or subclass of the template class, or in any other class.
In [Pree, 1995], several ways of composing template and hook classes are identified and presented in the form of a set of patterns, globally called meta-patterns. Meta-patterns categorize and describe the essential constructs of a framework at a meta-level. Design patterns provide proven solutions to recurrent design problems and are extremely useful for designing object-oriented frameworks.
In our framework example, template and hook methods are unified in a single class, because the object providing the template method is not separated from the objects providing the hook methods; it is actually an instance of MoneyTest (an instance of MoneyTest is also an instance of TestCase). This organization of template and hook methods is classified as the Unification meta-pattern, which corresponds to the simplest way of organizing template and hook methods. In Figure 2.3, this meta-pattern is represented attached to the classes of our example.

**Figure 2.3** The Unification meta-pattern attached to the testing framework.
With this unification meta-pattern, the developer must provide a subclass to adapt the behavior of running a test, and this can't be done at run time. Organizations that separate template classes and hook classes are called *Connection meta-patterns*; they allow the modification of the behavior of a $T$ object by composition, that is, by plugging in specific $H$ objects. The most sophisticated way of separating template and hook classes, the *Recursive meta-patterns*, occurs when the template class is a descendant of the hook class, which enables the composition of whole graphs of objects. In Figure 2.4 we show the basic differences between unification, connection, and recursive meta-patterns.
**Figure 2.4** Unification, connection, and recursive meta-patterns.
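The separation behind the Connection meta-patterns can be sketched as follows; the `TestBehavior` and `TestRunner` names are illustrative assumptions, standing for a hook class (H) plugged into a template class (T):

```java
// Hook class (H): defines the variable behavior as a separate type.
interface TestBehavior {
    void runTest();
}

// Template class (T): holds a reference to the hook object and drives it.
class TestRunner {
    private TestBehavior behavior;

    TestRunner(TestBehavior behavior) { this.behavior = behavior; }

    void setBehavior(TestBehavior behavior) {  // run-time adaptation by composition
        this.behavior = behavior;
    }

    void run() {                               // template method delegating to the hook
        System.out.println("setUp");
        try {
            behavior.runTest();
        } finally {
            System.out.println("tearDown");
        }
    }
}

class ConnectionDemo {
    public static void main(String[] args) {
        TestRunner runner = new TestRunner(() -> System.out.println("money test"));
        runner.run();
        runner.setBehavior(() -> System.out.println("db test"));  // swap at run time
        runner.run();
    }
}
```

Because the hook is held by reference, `setBehavior` can replace it while the program runs, which the Unification meta-pattern of the single-class example cannot do.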
With the single-class framework example we have illustrated the use of inheritance and dynamic binding for operations in a single class. By scaling the example up to a larger framework, with more abstract classes and more template and hook methods organized according to more powerful meta-patterns, we can get a better idea of the potential reuse power that well-designed frameworks can deliver to their users.
### 2.4.3 The Flow of Control in Framework-based Applications
The development of applications reusing frameworks leads to an inversion of control between the application and the software on which it is based. When we use a class library, we write the main body of the application and call the code we want to reuse. When we use a framework, we reuse the main body and write the code it calls [Gamma et al., 1995]. As a consequence, the code to be written must satisfy the particular names and calling conventions defined by the framework, which reduces the design decisions we need to make. This inversion of control is characteristic of frameworks and is referred to as the *Hollywood Principle*, meaning “Don’t call us, we’ll call you” [Cotter and Potel, 1995; Bosch et al., 1999].
This inversion of control flow in programs is an idea that has evolved over years of application development, passing through different ways of structuring programs: from procedural programs, to event-loop programs, and then to framework programs (Figure 2.5).
**Procedural programs**
In *procedural programs*, all the code for control flow is provided by the programmer. The program is executed sequentially, always under the programmer's control, and when necessary calls procedures from libraries provided by the operating system. The system takes action only when it is called by the program.
**Event-loop programs**
When using graphical user interfaces, sequential control flow is no longer appropriate, as end users may select when and which actions to perform. A solution to this problem led to the concept of *event-loop programs*, which let the user choose the order in which events happen, through interaction with input devices (mouse, keyboard, etc.). These programs have an event loop that is responsible for sensing user events and calling the corresponding parts of the program configured to handle them, while the flow of control within those parts remains the programmer’s responsibility.
**Framework programs**
Framework-based applications turn over control to the user, as happens with event-loop programs, and then to the original framework developers. The framework code assumes almost all flow of control, calling application code only when necessary. Calls are, however, not made exclusively in one direction: application code often calls framework code too. As a result of this two-way flow of control, we do not need to design and write the control code required by event-loop programs, or other code common to many applications, that can be written once and reused many times afterwards. Ideally, with frameworks we design and write only a small part of the total application.
The shifting of control flow is a matter of degree, not an absolute. We can say that a program sits on a scale somewhere between 0% and 100% framework-owned control flow. When developing applications using frameworks, the goal is to shift as much of the control flow as possible to the framework.
Back to our example, we will now analyze the oscillation of the flow of control between the framework and the application. As described before, the framework of this very simple example consists of a single class (TestCase) which is only customizable by inheritance. The application (MoneyTest) customizes the framework by providing implementations for the abstract hook methods setUp, runTest and tearDown.
The flow starts in the main method of the application's code. A MoneyTest object is created, and its run method is called. Due to the mechanism of dynamic binding, the run method selected for execution is the one implemented in TestCase, the superclass of MoneyTest, and thereby the control flow is transferred to the framework. The run method then starts and calls the setUp method, declared as abstract in TestCase and implemented in MoneyTest. Now, the dynamic binding mechanism selects for execution the setUp method implemented in MoneyTest, and thereby the control flow returns to the application. When the setUp method terminates, control flows back to the framework, and then to the application again in order to execute the runTest method implemented in MoneyTest, and so on until the end of the main method.
Figure 2.6 graphically describes how the control flow oscillates back and forth between the application and the framework, up to the moment of calling the runTest method.
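This oscillation can be made visible by instrumenting the example with print statements; the classes are renamed here (`TracedTestCase`, `TracedMoneyTest`) only to keep the sketch self-contained alongside Figure 2.2, and the printed labels are illustrative:

```java
abstract class TracedTestCase {                    // framework code
    public void run() {
        System.out.println("framework: run starts");
        setUp();
        System.out.println("framework: back from setUp");
        runTest();
        System.out.println("framework: back from runTest");
        tearDown();
        System.out.println("framework: run ends");
    }
    protected abstract void setUp();
    protected abstract void runTest();
    protected abstract void tearDown();
}

class TracedMoneyTest extends TracedTestCase {     // application code
    @Override protected void setUp() { System.out.println("application: setUp"); }
    @Override protected void runTest() { System.out.println("application: runTest"); }
    @Override protected void tearDown() { System.out.println("application: tearDown"); }

    public static void main(String[] args) {
        new TracedMoneyTest().run();  // hands control over to the framework
    }
}
```

The interleaved "framework:" and "application:" lines trace exactly the back-and-forth transfer of control described above.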
The mechanism used by this framework to call application-specific code relies on deriving application-specific classes (MoneyTest) from the base classes provided by the framework (TestCase), and on overriding their methods (setUp, runTest, tearDown).
While this customization mechanism focuses on inheritance, there are other mechanisms that rely on composition. Both kinds of mechanisms have advantages and drawbacks. The most significant difference lies in how they trade off flexibility of customization against run-time adaptability. Inheritance-based mechanisms offer good extension flexibility but don't support adaptation at run time. Composition-based mechanisms require the explicit definition of points of flexibility but support adaptation at run time.
Inheritance-based and composition-based mechanisms lead to two broad categories of frameworks: white-box and black-box frameworks.
### 2.4.4 Classifying Frameworks
Frameworks are typically classified according to the extension techniques provided and their scope of work.
**White-box and black-box frameworks**
Based on the extension techniques provided, frameworks can be classified along a continuum ranging from white-box frameworks to black-box frameworks [Johnson and Foote, 1988], as illustrated in Figure 2.7.
Figure 2.6 Illustration of control flow in the framework example.
Figure 2.7 Classification of frameworks based on the extension technique.
White-box frameworks rely heavily on inheritance and dynamic binding in order to achieve extensibility. Although white-box reuse is the hardest way to
use a framework, it is by far the most powerful.
Black-box frameworks are the easiest to use, because they are structured using object composition and delegation rather than inheritance. On the other hand, black-box frameworks are the most difficult to develop, because they require the definition of the right interfaces and hooks able to anticipate a wide range of application requirements.
Most real-world frameworks combine black-box and white-box characteristics, being thus called gray-box frameworks. They allow extensibility both by using inheritance and dynamic binding, as well as by defining interfaces. Gray-box frameworks are designed to avoid the disadvantages of black-box frameworks and white-box frameworks.
In addition to the classification above, frameworks can also be classified according to their scope of work. In [Fayad and Schmidt, 1997b], a classification of frameworks based on their scope is proposed, consisting of three categories: system infrastructure frameworks, middleware integration frameworks, and enterprise application frameworks.
System infrastructure frameworks aim to simplify the development and support of system infrastructure areas such as operating systems, user interfaces, communications, and language processing. Graphical user interface (GUI) frameworks, such as the Java Foundation Classes (JFC), the Microsoft Foundation Classes (MFC), or MacApp, are examples of frameworks that serve as underlying frameworks for other applications.
Middleware integration frameworks are usually used to integrate distributed applications and components. Examples of middleware integration frameworks include ORB frameworks, message-oriented middleware, and transactional databases.
Enterprise application frameworks address large application domains, such as telecommunications, banking, or manufacturing, and can provide a substantial return on investment as they support directly the development of end-user applications. A famous example of an enterprise framework is the IBM SanFrancisco Project.
These kinds of frameworks are related, as they layer on top of each other. Middleware integration frameworks usually include a system infrastructure framework in their underlying layer. Similarly, an enterprise framework includes both a middleware integration framework and a system infrastructure framework in its underlying layers.
## 2.5 History of Frameworks
Although the framework concept reached popularity only recently (in the 1990s), the history of frameworks dates back to the 1960s. The first examples of the framework concept found in the literature were designed to solve mathematical problems in Simula (1960s) and Smalltalk (1970s).
### 2.5.1 Early frameworks
The Simula programming language, created more than 30 years ago (1967), precipitated the invention of the concepts of object-oriented programming. Simula is particularly important for framework technology because it was specifically designed to support frameworks, or application-oriented extensions, as they were then called. Simula was designed as a minimal addition to Algol, extending it with the basic concepts of object-oriented programming: objects, classes, inheritance, virtual methods, references, and a type system. The object concepts first introduced by Simula have percolated into most current object-oriented languages, such as C++ or Java.
It is generally accepted that the most significant distinction between a framework and a mere class library lies in the presence of inverted control, in other words, in the possibility that code in the framework may call code in the user part. In primitive languages this is implemented with callbacks, that is, procedure parameters. In most object-oriented languages, inversion of control is achieved through virtual procedures, which are declared and invoked by the framework code, but whose implementations can be redefined by the user code. Simula has virtual procedures, but also has an inner mechanism, which has the same characteristics of a framework calling user code. Beta [Madsen et al., 1993] is the only other language also having this mechanism [Hedin and Knudsen, 1999].
Simula provides a standard library containing two frameworks: Simset for list handling and Simulation for discrete-event simulation. Each framework consists of a single packaging class that contains all the component classes, procedures, and variables. An application program obtains these capabilities by using the framework name as a prefix to the program. The Simset framework implements two-way circular lists. Simulation is a framework that allows the language to handle discrete-event simulation [Birritch, 1979]. By means of these two object-oriented frameworks, Simula provides superb facilities for simulation, namely pseudo-parallelism, real-time abilities, and simulation of complex systems. In addition, with Simula it is particularly easy to combine quite diverse frameworks.
### 2.5.2 GUI frameworks
In the late 1970s, the emerging interactive paradigm of systems based on Graphical User Interfaces (GUIs) made windows and events stand out as a new and challenging domain for programmers, for which they needed help to write software.
The difficulty of coding GUI applications directly on top of the complex procedural application programming interfaces (APIs) provided by the most popular GUI systems (Macintosh, X Window System, and Microsoft Windows) started a growing demand for finding better ways of developing software solutions.
The Smalltalk-80 user interface framework, named Model-View-Controller (MVC) and developed in the late 1970s, was perhaps the first widely used framework [Goldberg, 1984; Krasner and Pope, 1988]. MVC showed at that time (and continues to show today) that object-oriented programming is well suited for implementing GUIs. MVC divides a user interface into three kinds of components working in trios: a view and a controller interacting with a model.
One of the first user interface frameworks influenced by MVC was MacApp, which was developed by Apple to support the implementation of Macintosh applications [Schmucker, 1986]. MacApp was followed by user interface frameworks from universities, such as InterViews from Stanford University [Linton et al., 1989] and ET++ from the University of Zurich [Weinand et al., 1989].
MacApp, InterViews, and ET++ became very popular during the 1980s. These frameworks provided useful, generic abstractions for drawing views and windows, and offered an event-handling mechanism based on the MVC concept. Most importantly, writing an application with any of these frameworks was much easier, and resulted in a more stable code base, than writing directly against the APIs provided by the respective GUI systems.
But frameworks are not limited to user interfaces; they are applicable to basically any area of software design. They have been applied to the domains of operating systems [Russo, 1990], very large scale integration (VLSI) routing algorithms [Gossain, 1990], hypermedia systems [Meyrowitz, 1986], structured drawing editors [Vlissides and Linton, 1990; Beck and Johnson, 1994], network protocol software [Hueni et al., 1995], and manufacturing control [Schmidt, 1995], to mention a few.
### 2.5.3 Taligent frameworks
In 1992, Apple and IBM founded Taligent as a joint venture, which was joined by Hewlett-Packard in 1994. Taligent's goal was to develop a fully object-oriented operating system and portable application environment, which shipped in July 1995 as the CommonPoint Application System. CommonPoint was a set of tools for rapid application development consisting of more than a hundred small object-oriented frameworks [Andert, 1994; Cotter and Potel, 1995] running on top of OS/2, Windows NT, AIX, HP-UX, and a new Apple OS kernel.
CommonPoint was most similar in scope and portability to Sun's subsequent Java environment, but it was based on C++ and involved neither a virtual machine nor a new object-oriented programming language. The CommonPoint development environment was a visual, component-based, incremental development environment akin to the now-familiar IBM VisualAge and Borland JBuilder IDEs. The CommonPoint user interface paradigm, known as “People, Places, and Things”, extended the personal computer desktop metaphor to collaborative, distributed, task-centered workspaces that anticipated today's web-based environments. In terms of framework technology, Taligent's approach for CommonPoint shifted the focus away from large monolithic frameworks to many fine-grained, integrated frameworks.
In 1996, IBM took over sole ownership of Taligent, and in 1998 formally merged Taligent into IBM. During these two years, Taligent was an important center for object technology, providing key software components to IBM development tools and licensing key Java and C++ technologies to industry partners such as Sun, Netscape, and Oracle. After 1998, Taligent engineering teams continued their development of object technologies and products at IBM.
2.5.4 Frameworks today (2000s)
The influence of the new GUI frameworks and Taligent's innovative technological approach attracted considerable interest to the framework concept, and both promoted frameworks widely in larger communities.
At present, in the 2000s, frameworks are important and are becoming even more so as software systems increase in size and complexity. Component systems such as OLE, OpenDoc, and Beans are frameworks that solve standard problems of building compound documents and other composite objects. Frameworks like the Microsoft Foundation Classes (MFC), many parts of Sun's Java Development Kits (AWT, Swing, RMI, JavaBeans, etc.), implementations of the Object Management Group's (OMG) Common Object Request Broker Architecture (CORBA), IBM's WebSphere and SanFrancisco, Apache's frameworks (Struts, Turbine, Avalon, etc.), the Eclipse framework for integrated development environments, and the JUnit testing framework are all very important in contemporary software development.
Since its appearance in 1995, Sun's Java has been one of the most successful, innovative, and continuously evolving languages and framework families (JavaBeans, Java Foundation Classes, Enterprise JavaBeans components, JavaOS, etc.). Java supports many platforms, from very small ones such as smart cards, through thin and thick clients, to large mainframe installations. Java 2 Enterprise Edition (J2EE) has become one of the most successful frameworks for web and enterprise technology.
In 2001, a new framework called .NET emerged from Microsoft. It has many features similar to J2EE and will probably be one of its closest competitors. Although both frameworks stand on a similar foundation of programming languages, object models, and virtual machines, they differ in the design goals of their runtime environments, namely the portability of code to different platforms: .NET uses a common intermediate language, whereas J2EE uses bytecode for a virtual machine. These two dominant frameworks promise to compete very closely in the years to come.
Concurrent determination of connected components
Wim H. Hesselink *,1, Arnold Meijster 2, Coenraad Bron
Department of Mathematics and Computing Science, University of Groningen, P.O. Box 800, 9700 AV Groningen, Netherlands
Received 4 September 1998; received in revised form 5 August 2000; accepted 5 August 2000
Abstract
The design is described of a parallel version of Tarjan’s algorithm for the determination of equivalence classes in graphs that represent images. Distribution of the vertices of the graph over a number of processes leads to a message passing algorithm. The algorithm is mapped to a shared-memory architecture by means of POSIX threads. It is applied to the determination of connected components in image processing. Experiments show a satisfactory speedup for sufficiently large images. © 2001 Elsevier Science B.V. All rights reserved.
Keywords: Connected components; Parallel algorithm; Pthreads; Mutex; Condition variable
1. Introduction
In many image processing applications, one of the first steps is to compute the connected components of the image. For this purpose one usually takes the simple breadth first scanning algorithm, which stems from the corresponding problem in graph theory. This algorithm has the disadvantages that it requires a FIFO-queue the size of which is a priori unknown, and that it is hard to parallelize. The number of pixels involved is often large, say more than a million, and for real-time applications often several images must be processed per second. It is therefore important to have an efficient parallel algorithm for this task. This is confirmed by the fact that there are many articles on parallel image component labelling. Most of these articles aim at distributed memory architectures, e.g., cf. [2,7,9].
Two classical sequential algorithms that explicitly use the fact that the graph is an image, are given in [11,13]. The main drawback of these algorithms is the use of a large equivalence table. Inspired by these two algorithms and Tarjan’s disjoint set algorithm [15], we here present an algorithm that does not need such a large table, and
* Corresponding author.
E-mail addresses: wim@cs.rug.nl (W.H. Hesselink), a.meijster@rc.rug.nl (A. Meijster), cb@cs.rug.nl (C. Bron).
1 http://www.cs.rug.nl/~wim
2 http://www.rug.nl/hpc/people/arnold
that can elegantly be parallelized. The sequential algorithm in itself is not new [8], but as far as we know there does not exist a parallel implementation of this algorithm, which is the main focus of this paper. The algorithm can be implemented on distributed as well as shared memory machines.
The algorithm determines a directed spanning forest for an undirected graph by placing links that are not necessarily along the edges of the graph. It is meant for large graphs, the nodes of which are distributed over a relatively small number of processes, preferably in such a way that most of the edges belong to only one process. In this respect, the situation differs from settings as investigated in [12,16], where the processes are in one-to-one correspondence with the nodes. Indeed, a typical setting for our algorithm could be a medical application used by medical specialists to analyse 3-D CT-images of a brain. In that case, the graph may have in the order of $10^8$ points and the computation can be distributed over, say, four up to 16 processors.
Although we are especially interested in the application to images, we present the algorithm for general undirected graphs. The design goes through four stages. We first give a version of Tarjan’s sequential algorithm, then distribute this over several processes with message passing. This design is then mapped to a shared memory architecture by means of mutual exclusion and synchronization. Finally, the mutual exclusion and synchronization are implemented by means of POSIX thread primitives.
The resulting algorithm is a concurrent one in which the amount of communication is decided at runtime. Such algorithms are very error prone. Our presentation may seem to focus on logic, but that is not the case. Since we want a working algorithm, our focus is on correctness, i.e., preservation of invariants, avoidance of deadlock, and guarantee of progress. Logical formulae are the only way to unambiguously express the properties needed.
Since we want to avoid unnecessary communication, we use no path compression beyond the parts of the graph under control of a single process. If the vertices of the graph are distributed randomly over the processes, this leads to bad worst case performance (i.e. quadratic in the lengths of the paths). In practice, however, there is often a natural way to distribute the nodes over the processes such that most edges adjacent to a node belong to only one process. In that case, the performance of the algorithm is quite good.
We finally describe the application to the determination of connected components in images. Since images are usually more or less constant locally, we sketch an optimization that can reduce the number of communications needed significantly. The results show that the algorithm makes distribution quite effective.
Overview: In Section 2 we give the abstract problem and develop a sequential solution. In Section 3, the algorithm is distributed over several processes in an asynchronous way. In Section 4, we specialize to a shared memory architecture in bounded space with atomicity brackets and await statements. In Section 5, these constructs are implemented by means of POSIX thread primitives. Section 6 describes the finalization of the algorithm. Section 7 contains the application to image processing. We draw conclusions in Section 8.
2. The problem and a sequential solution
In the image processing context, points of an image are regarded as directly connected if they are neighbours and have (nearly) the same colour or grey value. The problem is to determine the connected components of the image. Since images contain many points, and since one may want to process many subsequent images in real time, there is reason to consider distributed solutions. Graph theory is the proper abstract setting for any discussion of connected components.
We therefore let the image be represented by an undirected graph. The aim is to determine its connected components by means of a distributed algorithm. Our first step is the design of a sequential algorithm, which is a variation of Tarjan’s algorithm, cf. [15, Chap. 2; 14, 12.3].
Let \((V, E)\) be an undirected graph. We regard \(E\) as a (symmetric binary) relation. The connected components of the graph are the equivalence classes of the reflexive transitive closure \(E^*\) of \(E\). The idea is to represent the components by rooted trees by means of an array variable
\[
\text{par} : \text{array } V \text{ of } V ,
\]
which stands for “parent”. We define function \(\text{root} : V \rightarrow V\) by
\[
\text{root}(n) = \begin{cases} n & \text{if } \text{par}[n] = n \\ \text{root}(\text{par}[n]) & \text{else} \end{cases}.
\]
Since \(V\) is finite, function \(\text{root}\) is well defined if and only if the directed graph induced by the arrows of \(\text{par}\) has no cycles of length \(> 1\). We want to establish the postcondition that function \(\text{root}\) is well defined and satisfies
\[
Q: \ (\forall m, n :: (m, n) \in E^* \equiv \text{root}(m) = \text{root}(n)) .
\]
In order to establish \(Q\), we introduce the equivalence relation \(\text{Con}\) given by
\[
(m, n) \in \text{Con} \equiv \text{root}(m) = \text{root}(n) .
\]
Now \(Q\) is equivalent to \(E^* = \text{Con}\).
We assume that the initialization establishes \(\text{par}[n] = n\) for all \(n \in V\). Then function \(\text{root}\) is well defined and relation \(\text{Con}\) is equal to the identity. We shall modify array \(\text{par}\) in such a way that function \(\text{root}\) remains well defined and that relation \(\text{Con}\) is only extended. We therefore only modify \(\text{par}\) by assignments of the form \(\text{par}[x] := y\) under one of the preconditions
\[
P0(x, y): \ (\exists k : k \geq 1 : \text{par}^k[x] = y) ,
\]
\[
P1(x, y): \ \text{par}[x] = x \land \text{root}(y) \neq x .
\]
Here, \(\text{par}^k[x]\) is obtained by \(k\) subsequent applications of \(\text{par}\) on index \(x\). In the case of \(P0(x, y)\), node \(y\) is an ancestor of \(x\) and the assignment \(\text{par}[x] := y\) does not modify relation \(\text{Con}\). Such an assignment is called path compression, cf. [1]. In case
of $P_1(x, y)$, node $x$ is a root and not an ancestor of $y$. Since $y$ becomes the parent of $x$, relation $\textit{Con}$ is strictly extended.
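In union-find terms, the `par` array, the `root` function, and the precondition `P1` above can be sketched as follows. This is an illustrative Python rendering, not the paper's implementation; the helper names `make_forest` and `link_p1` are ours.

```python
def make_forest(n):
    # Initialization: par[m] = m for all m, so every node is its own root
    # and the relation Con is the identity.
    return list(range(n))

def root(par, m):
    # Follow parent links until a self-loop (the root) is reached.
    while par[m] != m:
        m = par[m]
    return m

def link_p1(par, x, y):
    # Assignment par[x] := y under precondition P1(x, y):
    # x is a root and root(y) != x, so Con is strictly extended
    # and no cycle of length > 1 can arise.
    assert par[x] == x and root(par, y) != x
    par[x] = y
```

An assignment under `P0` (linking a node to one of its own ancestors, i.e. path compression) would leave `Con` unchanged, whereas `link_p1` merges two distinct trees.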
We now come to the edge relation $E$ of the graph. Since we do not want to store every unordered pair twice, we assume that relation $E$ is represented by a set $\textit{edlis}$ of pairs of nodes via the initial relation $E = \text{sym}(\textit{edlis})$ where function $\text{sym}$ is defined by
$$(m, n) \in \text{sym}(R) \equiv (m, n) \in R \lor (n, m) \in R.$$
We take $\textit{edlis}$ to be a program variable and introduce the loop invariant
$$J_0 : E^* = (\textit{Con} \cup \text{sym}(\textit{edlis}))^*.$$
Predicate $J_0$ holds initially, since then $\text{sym}(\textit{edlis}) = E$ and $\textit{Con}$ is the identity. If $\textit{edlis}$ is empty, $J_0$ implies predicate $Q$ since $\textit{Con}$ is an equivalence relation. We therefore take $\textit{edlis} \neq \emptyset$ as the guard of the loop.
Now the abstract sequential algorithm is
\[
A: \quad \text{while } edlis \neq \text{empty do} \\
\quad \text{fetch } (u,v) \text{ from } edlis ; \\
\quad \text{Extend} \\
\quad \text{od} .
\]
where command $\text{Extend}$ has to restore predicate $J_0$ if it is falsified by the removal of $(u, v)$ from $\textit{edlis}$. Restoration can be done by placing a \texttt{par} pointer between the components of $u$ and $v$. We therefore search for elements $x, y$, connected to $u$ and $v$, that satisfy $P_1(x, y)$. We thus introduce the predicate
$$JE : (u, x), (v, y) \in \textit{Con} \lor (u, y), (v, x) \in \textit{Con}$$
and describe $\text{Extend}$ by
\[
\text{Extend:} \quad \text{if } (u,v) \notin \textit{Con} \text{ then} \\
\quad \text{choose } x, y \text{ with } P1(x,y) \land JE ; \\
\quad \text{par}[x] := y \\
\quad \text{fi} .
\]
It is easy to see that in this way $J_0$ is preserved and that, consequently, algorithm A is correct. So it remains to implement $\text{Extend}$. Since relation $\textit{Con}$ is not directly available, we implement $\text{Extend}$ by means of a loop with $JE$ as invariant. Since $\textit{Con}$ is an equivalence relation, $JE \land x = y$ implies $(u, v) \in \textit{Con}$. We can therefore refine $\text{Extend}$ as follows.
\[
\text{Extend:} \quad x := u ; \ y := v \ \{JE\} ; \\
\quad \text{while } x \neq y \land \neg P1(x,y) \text{ do} \\
\quad \quad \text{modify } x, y \text{ while preserving } JE \\
\quad \text{od} ; \\
\quad \text{if } x \neq y \text{ then } \text{par}[x] := y \text{ fi} .
\]
For the ease of distributed verification of the inequality $\text{root}(y) \neq x$ in $P_1(x, y)$, we assume that the set $V$ has a total order $\leq$ and introduce the additional invariant (cf. [14, p. 261]):
$$J1: \text{par}[n] \leq n.$$
Here and henceforth, we use the convention that all invariants are universally quantified over the free variables they contain (here $n$).
We now decide that the loop in $\text{Extend}$ preserves the invariant $JE \land x \geq y$. We therefore assume that the pairs in $\text{edlis}$ are ordered by
$$J2: (m,n) \in \text{edlis} \Rightarrow m > n.$$
In the body, we replace $x$ by $\text{par}[x]$ and, if necessary, restore $x \geq y$ by swapping. Now the guard of the loop can be simplified since $J1 \land x \geq y$ implies
$$P_1(x, y) \equiv \text{par}[x] = x \land x \neq y.$$
It follows that
$$x \neq y \land \neg P_1(x, y) \equiv x \neq y \land \text{par}[x] \neq x$$
and thus we obtain
\[
\text{Extend:} \quad x := u ; \ y := v ; \\
\quad \text{while } x \neq y \land \text{par}[x] \neq x \text{ do} \\
\quad \quad x := \text{par}[x] ; \\
\quad \quad \text{if } x < y \text{ then } x, y := y, x \text{ fi} \\
\quad \text{od} ; \\
\quad \text{if } x \neq y \text{ then } \text{par}[x] := y \text{ fi} .
\]
Since $V$ is finite, it is easy to see that the loop terminates.
The efficiency of algorithm $A$ can be improved considerably by path compression, i.e., by extending the final $\text{then}$ branch of $\text{Extend}$ with assignments $\text{par}[z] := y$ for all nodes $z$ on the $\text{par}$ paths of $u$ and $v$. This optimization preserves all invariants. A simple version of it only adds $\text{par}[u] := y$ and $\text{par}[v] := y$. In our application this seems to be just as effective.
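Assuming the concrete encoding $V = \{0, \ldots, n-1\}$ with the usual order (so that invariant $J1$ reads `par[x] <= x`), the refined `Extend` loop together with the simple path-compression variant can be sketched in Python as follows; this is an illustrative rendering, not the paper's code.

```python
def root(par, m):
    # Follow par pointers to the root (the node r with par[r] = r).
    while par[m] != m:
        m = par[m]
    return m

def components(n, edges):
    """Sequential algorithm A over V = {0, ..., n-1} (our encoding),
    with invariant J1 (par[m] <= m) and the simple path compression
    par[u] := y; par[v] := y mentioned at the end of Section 2."""
    par = list(range(n))              # par[m] = m: every node is a root
    for u, v in edges:
        if u == v:
            continue                  # self-loops connect nothing
        if u < v:
            u, v = v, u               # invariant J2: keep u > v
        x, y = u, v
        while x != y and par[x] != x:
            x = par[x]                # J1 makes x strictly decrease
            if x < y:
                x, y = y, x           # maintain x >= y
        if x != y:                    # here x is a root and x > y: P1(x, y)
            par[x] = y
            if u != y:                # after the link, y is an ancestor of
                par[u] = y            # both u and v, so P0 justifies these
            if v != y:                # two compression assignments
                par[v] = y
    return par
```

Two nodes end up in the same component exactly when `root` maps them to the same value.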
3. Distribution
In this section, we distribute algorithm $A$ over a system of sequential processes that communicate by message passing. We use the following convention with respect to private variables. If $x$ is a private variable of process $p$, we refer to it as $x$ in the code and as $x.p$ if $p$ is not obvious from the context. Let $Process$ be the set of
processes. We assume that the set \( V \) is distributed over the processes by means of a function \( \text{owner}: V \rightarrow \text{Process} \). We assume that process \( p \) is allowed to inspect and modify \( \text{par}[x] \) if and only if \( p = \text{owner}(x) \).
We assume that \( edlis \) is distributed over the processes as well. So, each process \( p \) has its own set \( edlis(p) \) and we regard \( edlis \) as an alias for the union of the sets \( edlis(p) \). We introduce the invariant
\[
J_3: \quad (m, n) \in edlis(q) \Rightarrow \text{owner}(m) = q .
\]
Since the loop in \( \text{Extend} \) can only be executed by process \( p \) as long as \( \text{owner}(x) = p \), we introduce the local search command
\[
\text{Search:} \quad x := u ; \ y := v ; \\
\quad \text{while } \text{owner}(x) = \text{self} \land x \neq y \land \text{par}[x] \neq x \text{ do} \\
\quad \quad x := \text{par}[x] ; \\
\quad \quad \text{if } x < y \text{ then } x, y := y, x \text{ fi} \\
\quad \text{od} ,
\]
where \( \text{self} \) stands for the executing process. Since the guards are evaluated from left to right, \( \text{par}[x] \) is not inspected if \( \text{owner}(x) \neq \text{self} \). Execution of \( \text{Search} \) establishes the postcondition
\[
\text{owner}(x) \neq \text{self} \lor x = y \lor \text{par}[x] = x.
\]
It is now clear that each process should repeatedly execute
\[
\text{fetch } (u, v) \text{ from } \text{edlis}(\text{self}) ; \\
\text{Search} ; \\
\text{if } x \neq y \text{ then} \\
\quad \text{if } \text{owner}(x) = \text{self} \text{ then } \text{par}[x] := y \\
\quad \text{else put } (x, y) \text{ into } \text{edlis}(\text{owner}(x)) \text{ fi} \\
\text{fi} .
\]
This program fragment preserves \( J_0 \land J_1 \land J_2 \land J_3 \), i.e., indeed, \( J_0, J_1, J_2, J_3 \) are invariants. It terminates for the same reason as in the case of the sequential algorithm.
In this way, the sets \( edlis(p) \) become buffers with one consumer and many producers. Process \( p \) fetches elements from \( edlis(p) \) and other processes may put elements into it. These actions therefore require communication: the last line of this fragment can be read as “send \((x, y)\) to the process that owns \( x \)”.
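As a sanity check of this scheme, here is a sequential, round-robin simulation in Python. It is illustrative only: the scheduler sees all queues at once, so running until every queue is empty replaces real message passing and the termination detection developed below; the function names are ours.

```python
from collections import deque

def root(par, m):
    while par[m] != m:
        m = par[m]
    return m

def distributed_components(n, edges, owner):
    """Round-robin simulation (ours) of the message-passing scheme:
    each process p keeps an edge list edlis[p] (invariant J3) and
    forwards an edge (x, y) to owner(x) when Search leaves p's region."""
    par = list(range(n))
    nproc = max(owner) + 1
    edlis = [deque() for _ in range(nproc)]
    for u, v in edges:
        if u < v:
            u, v = v, u               # invariant J2: u > v
        if u != v:
            edlis[owner[u]].append((u, v))
    while any(edlis):                 # some queue still holds an edge
        for p in range(nproc):
            while edlis[p]:
                u, v = edlis[p].popleft()
                x, y = u, v
                # Search: stop at a foreign node, a match, or a root
                while owner[x] == p and x != y and par[x] != x:
                    x = par[x]
                    if x < y:
                        x, y = y, x
                if x != y:
                    if owner[x] == p:
                        par[x] = y    # P1(x, y) holds: x is a local root
                    else:             # "send edge (x, y) to owner(x)"
                        edlis[owner[x]].append((x, y))
    return par
```

A forwarded edge keeps the first component owned by its destination process, so invariant $J3$ is maintained by construction.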
Since communication is expensive in performance, we partition the set \( edlis(p) \) into two parts \( edlis0(p) \) and \( edlis1(p) \), and assume the invariant \( edlis(p) = edlis0(p) \cup edlis1(p) \) with initially
\[
edlis0(p) = \{(u, v) \in edlis(p) \mid \text{owner}(v) = p\} ,
\]
\[
edlis1(p) = \{(u, v) \in edlis(p) \mid \text{owner}(v) \neq p\} .
\]
We can therefore treat \( edlis0(p) \) in an initial program fragment A0, obtained from A by substituting \( edlis0(p) \) for \( edlis \).
\[
A0: \quad \text{while } edlis0(self) \neq \text{empty do} \\
\quad \text{fetch } (u,v) \text{ from } edlis0(self); \\
\quad \text{Extend} \\
\quad \text{od}.
\]
Since initially \( \text{par}[z] = z \) for all nodes \( z \), fragment A0 preserves the invariant that \( \text{owner}(\text{par}[z]) = p \) for all \( z \) with \( \text{owner}(z) = p \).
During the treatment of \( edlis1(p) \), process \( p \) must be able to put elements into \( edlis1(q) \) where \( q \) is some process with \( q \neq p \). As a consequence, process \( p \) must not stop when its set \( edlis1(p) \) is empty, since other processes may insert new elements into \( edlis1(p) \). We declare for each process a private variable \( \text{continue} \) to indicate that new pairs may yet arrive.
\[
A1: \quad \text{while } \text{continue} \text{ do} \\
\quad \text{fetch } (u,v) \text{ from } edlis1(self); \\
\quad \text{Search} ; \\
\quad \text{if } x \neq y \text{ then} \\
\quad \quad \text{if } \text{owner}(x) = \text{self} \text{ then } \text{par}[x] := y \\
\quad \quad \quad \text{else put}(x,y) \text{ into } edlis1(\text{owner}(x)) \text{ fi} \\
\quad \quad \text{fi} \\
\quad \text{od}.
\]
The program for process \( p \) now becomes the composition of the two parts A0 and A1. Part A0 needs no further refinement. Part A1 primarily requires termination detection: how to give the boolean variables \( \text{continue} \) the adequate values?
We assume that process \( p \) starts up with initial values for \( edlis0(p) \) and \( edlis1(p) \). The size of the union of the sets \( edlis1(p) \) only shrinks. Every process can terminate when all sets \( edlis(p) \) are empty and each process has finished its local computation, but not earlier. To keep track of the edges that have yet to be treated, we attach a unique token \( t \) to each edge \((u,v)\) in \( edlis1(p) \). This token serves to indicate the originator of the pair \((u,v)\) for the sake of termination detection. It is sent unmodified with the changing edge \((u,v)\) as a message \( \text{edge}(u,v,t) \). When no triple is sent, the token \( t \) is destroyed.
Each token shall belong to the process that creates it. We represent the assignment of tokens to processes by a function \( \text{origin}: \text{Token} \rightarrow \text{Process} \). Each process gets a private integer variable \( \text{ctok} \) to count its number of outstanding tokens. Whenever a token is destroyed, a message \( \text{down} \) is sent to its origin. A process decrements \( \text{ctok} \) when it receives a message \( \text{down} \). We thus have the invariant that \( \text{ctok} \) of process \( q \) is the number of messages \( \text{edge}(u,v,t) \) in transit with \( \text{origin}(t) = q \) plus the number of
down messages in transit to \( q \). This can be expressed by
\[
J4: \quad \text{ctok}.q = \#\{\text{edge}(u,v,t) \mid \text{origin}(t) = q\} + \text{transit}(\text{down},q),
\]
where we use \( \text{transit}(m,q) \) to denote the number of messages \( m \) in transit to \( q \), and \#\( A \) to denote the number of elements of the set \( A \).
We introduce a message \( \text{stop} \) to signal termination. Indeed, when all tokens of all processes have been destroyed, all buffers are empty and every process may terminate.
In order to decide that all tokens of all processes have been destroyed, we introduce a global counter \( \text{gc} \) for the number of processes that are initializing or have \( \text{ctok} > 0 \). We give one process, say \( \text{adm} \), the additional task to administrate the value of \( \text{gc} \), which initially equals \#\( \text{Process} \). A process that reaches \( \text{ctok} = 0 \), sends a \( \text{gcdown} \) message to \( \text{adm} \). We postulate the invariant that \( \text{gc} \) equals the number of processes \( q \) with \( \text{ctok}:q > 0 \) plus the number of \( \text{gcdown} \) messages in transit, i.e.
\[
J5: \quad \text{gc} = \#\{q \mid \text{ctok}.q > 0\} + \text{transit}(\text{gcdown},\text{adm}).
\]
When process \( \text{adm} \) receives the message \( \text{gcdown} \) it decrements \( \text{gc} \) and, if \( \text{gc} \) becomes 0, it sends messages \( \text{stop} \) to all processes, as expressed in command \( \text{GcDown} \):
\[
\text{GcDown}: \quad \text{gc} := \text{gc} - 1; \quad \text{if} \quad \text{gc} = 0 \quad \text{then} \quad \text{for all } q \in \text{Process} \quad \text{do send } \text{stop} \quad \text{to } q \quad \text{od} \quad \text{fi}.
\]
A process that receives \( \text{stop} \), sets \( \text{continue} \) to false. This leads to the invariants
\[
J6: \quad \text{continue}.q \equiv \text{gc} > 0 \lor \text{transit}(\text{stop},q) > 0; \\
J7: \quad \text{transit}(\text{stop},q) > 0 \Rightarrow \text{gc} = 0.
\]
Fragment A1 is now replaced by
A1: \[
\text{Init1} ; \\
\text{while } \text{continue} \text{ do} \\
\quad \text{in } \text{edge}(u,v,t) \rightarrow \text{Search} ; \\
\quad \quad \text{if } x \neq y \land \text{owner}(x) \neq \text{self} \text{ then} \\
\quad \quad \quad \text{send } \text{edge}(x,y,t) \text{ to } \text{owner}(x) \\
\quad \quad \text{else} \\
\quad \quad \quad \text{if } x \neq y \text{ then } \text{par}[x] := y \text{ fi} ; \\
\quad \quad \quad \text{send } \text{down} \text{ to } \text{origin}(t) \\
\quad \quad \text{fi} \\
\quad [\;] \; \text{down} \rightarrow \text{ctok} := \text{ctok} - 1 \\
\quad \text{ni} \\
\text{od} .
\]
The auxiliary command Init1 is treated below. The rest of A1 is a repetition that consists of reception and treatment of messages. For this purpose, we use a variation of the in...ni construct of the language SR of [4]. It involves waiting for the next message to arrive, the choice according to the arriving message, and it introduces formal parameters for the arguments of the message, if any. Note that this code implies that a process may send asynchronous messages to itself. Such messages can easily be eliminated. We have not done so for the sake of uniformity.
After the treatment of edge(u,v,t), the process may perform path compression along the two paths it has investigated in its own part of the graph. In view of the communication overhead, we decided not to consider more extensive forms of path compression.
For the sake of uniformity, the initialization of A1 translates the edges in edlis1 into edge messages from the process to itself. A1 is thus initialized by
\[
\text{Init1: } \quad \text{ctok} := 0 ; \\
\text{continue} := \text{true} ; \\
\text{for all } (x, y) \in \text{edlis1}(\text{self}) \text{ do} \\
\quad \text{create a token } t \text{ with } \text{origin}(t) = \text{self} ; \\
\quad \text{ctok} := \text{ctok} + 1 ; \\
\quad \text{send } \text{edge}(x, y, t) \text{ to self} \\
\text{od} ; \\
\text{if } \text{ctok} = 0 \text{ then send gcdown to adm fi} .
\]
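Stripped of real message passing, the token bookkeeping of `ctok`, `gc`, and the `gcdown`/`stop` signals can be sketched as follows (illustrative Python; the class and method names are ours, and messages are reduced to direct calls on a single administrator object).

```python
class TerminationDetector:
    """Sketch of the token-counting scheme: each process counts its
    outstanding tokens (ctok); the administrator adm counts processes
    that still hold tokens (gc) and signals stop when gc reaches 0."""
    def __init__(self, nproc):
        self.ctok = [0] * nproc
        self.gc = nproc              # all processes start "initializing"
        self.stopped = [False] * nproc

    def create_token(self, p):
        # Init1: process p attaches a fresh token with origin p.
        self.ctok[p] += 1

    def init_done(self, p):
        # End of Init1: a process with no tokens reports gcdown at once.
        if self.ctok[p] == 0:
            self.gcdown()

    def down(self, p):
        # A token with origin p was destroyed ("send down to p").
        self.ctok[p] -= 1
        if self.ctok[p] == 0:
            self.gcdown()

    def gcdown(self):
        # Command GcDown at process adm.
        self.gc -= 1
        if self.gc == 0:
            for q in range(len(self.stopped)):
                self.stopped[q] = True   # "send stop to q"
```

With messages collapsed into calls, the invariants $J4$ and $J5$ become the simple statement that `ctok[p]` counts live tokens of origin `p` and `gc` counts processes with `ctok[p] > 0` or still initializing.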
In order to verify the invariants, we first need to describe the execution model. Processes are concurrently allowed to receive and execute messages. Since the effect of execution of a message only depends on the message and the precondition of the accepting process, we may (for the sake of the correctness proof) assume that the messages are accepted by the processes in some linear order and that a message is accepted only when the command associated to the previous message has been executed completely by the previous accepting process. In other words, in our model, the acceptance of a message includes atomic execution of the associated command. The invariants are predicates that are supposed to hold before and after each complete acceptance of a message.
It is now straightforward to verify the invariants J4, J5, J6, and J7. Indeed, each of these predicates holds when all processes have completed Init1. Acceptance of a message edge leads to re-sending of a message edge or down. Therefore, J4 is preserved. Acceptance of down by process p preserves J4 since ctok.p is decremented. It also preserves J5, since gcdown is sent if ctok.p reaches 0. Acceptance of gcdown
by process adm preserves $J_5$ since $gc$ is decremented. It also preserves $J_6$ and $J_7$ since $stop$ is sent if and only if $gc$ reaches 0. Acceptance of $stop$ by process $p$ preserves $J_6$ since $continue. p$ is set to false and $J_7$ implies $gc = 0$.
It follows from $J_4 \land J_5$ that, while there are edges to be processed, we have $gc > 0$, so that $J_6$ implies that not all processes have terminated yet. On the other hand, when there are no messages in transit, $J_4 \land J_5$ implies $gc = 0$, so that $J_6$ implies $\neg continue.q$ for all processes $q$. All processes have then terminated.
4. Bounded shared memory
We now assume that the processes communicate by means of shared memory, and that the size of this memory is bounded. We use the convention that shared variables are in typewriter font. In this section we specify the requirement on atomicity and synchronization by means of atomicity brackets and $await$ statements. The next section is devoted to the implementation of these constructs by means of the POSIX thread primitives.
We eliminate the messages $edge$, $down$ and $stop$, and replace them by procedures $PutEdge$, $Down$, and $Stop$. The edge triples that are to be communicated between the processes will be placed somewhere in the shared memory. Each process is equipped with a private list of such triples and has a private variable $head0$ that serves as the head of this list. The private list is empty if $head0 = nil$. Procedure $GetEdge$ fetches a triple from the private list.
We introduce a procedure $AwaitEdge$, the postcondition of which implies that $head0 \neq nil$ or $Stop$ has been called. Then program fragment A1 is replaced by
\begin{verbatim}
A2: Init2 ()
    loop
        AwaitEdge ()
        if head0 = nil then exitloop fi
        GetEdge (u, v, t)
        Search
        if x ≠ y ∧ owner(x) ≠ self then
            PutEdge (owner(x), x, y, t)
        else
            if x ≠ y then par[x] := y fi
            Down (origin(t))
        fi
    endloop
\end{verbatim}
In order to replace the messages $down$ by a procedure $Down$, we replace the private variables $ctok$ by a shared variable
\begin{verbatim}
ctok: array Process of Integer
\end{verbatim}
and we define
```
procedure Down (q: Process) =
var b: Boolean;
⟨ctok[q] := ctok[q] − 1; b := (ctok[q] = 0)⟩;
if b then GcDown () fi
end.
```
Here, atomicity brackets ⟨⟩ are used to specify that the command enclosed by them shall be executed without interference. Now GcDown is a procedure given by
```
procedure GcDown () =
var b: Boolean;
⟨gc := gc−1; b := (gc = 0)⟩;
if b then Stop () fi
end.
```
We replace the private variables continue by a shared array
```
cntu: array Process of Boolean;
```
with initially cntu[q] = true for all processes q. We then define procedure Stop by
```
procedure Stop () =
for all q ∈ Process do ⟨cntu[q] := false⟩ od
end.
```
Note that, in this way, the special process adm is eliminated.
**Remark.** One could of course replace the array cntu by a single boolean variable. This would cause memory contention, however, when many processes try to access it concurrently. We therefore prefer to use an array.
We finally come to the central problem of a shared data structure where the processes can deposit the edges destined for other processes. For this purpose, we assume that there is a constant $M$ such that $\#\text{edlis1}(p) \leq M$ for all processes $p$. Let $N$ be the number of processes. It then follows that we need at most $N \times M$ tokens. We thus define the type $\text{Token} = [0 \ldots N \times M − 1]$ and use this type as the index set for the messages. We decide to store the triple $(x, y, t)$ always at index $t$ by means of the shared variable
```
pair: array Token of $V \times V$.
```
The message buffers are constructed as lists of pairs. For this purpose, we introduce a value $\text{nil} \notin \text{Token}$ to designate the empty list and declare the shared variables
```
next: array Token of Token ∪ {nil} ;
head: array Process of Token ∪ {nil} ;
```
with initially head[q] = nil for all q. We use head[q] as the head of the list for process q where other processes can write. Now procedure PutEdge is given by
```
procedure PutEdge (q : Process; x, y : V; t : Token) =
pair[t] := (x, y);
⟨ next[t] := head[q] ;
head[q] := t ⟩
end.
```
Reading and writing of head[q] must be done under mutual exclusion. The assignment to pair[t] is not threatened by interference, however, since we preserve the invariant that there is always at most one process that holds token t.
Recall that every process also has a private variable head0 as the head of a private list of tokens. A process fetches an element from its private list by the simple procedure
```
procedure GetEdge (var x, y : V; var t : Token) =
t := head0 ;
head0 := next[t] ;
(x, y) := pair[t]
end.
```
In procedure AwaitEdge, the two lists of a process are swapped whenever the private list is empty and the public one is not:
```
procedure AwaitEdge () =
if head0 = nil then
⟨ await head[self] ≠ nil ∨ ¬ cntu[self] then
head0 := head[self] ;
head[self] := nil ⟩
fi
end.
```
Here we use an atomic await statement as described in (e.g.) [3,5]. Note that, indeed, AwaitEdge has the postcondition that head0 ≠ nil if cntu[self] holds.
We assume that processes are numbered from Process = [0..N-1]. We distribute the tokens according to the rule that process p gets the tokens t with p*M ≤ t < (p+1)*M. It follows that function origin is given by origin(t) = t div M. Then the initialization is given by
```
procedure Init2 () =
    var t := self * M ;
    head0 := nil ;
    for all (x, y) ∈ edlis1(self) do
        next[t] := head0 ;
        head0 := t ;
        pair[t] := (x, y) ;
        t := t + 1
    od ;
    ctok[self] := t − self * M ;
    if ctok[self] = 0 then GcDown () fi
end .
```
Here the assignments to ctok are not threatened by interference with Down, since the tokens from process q are not yet available to other processes. The use of two lists for every process enables us to treat the initialization of the processes as a private activity.
5. Using mutexes and condition variables
In this section, we implement the atomicity brackets and the await statement introduced in the previous section by means of mutexes and condition variables as specified in the POSIX thread standard, cf. [6,10].
Mutexes serve to implement the atomicity brackets \( \langle \rangle \). A mutex can be regarded as a record with a single field owner of type Process; \( m.\text{owner} = \bot \) means that the mutex is free. The commands to lock and unlock a mutex \( m \) are given by
\[
\text{lock}(m): \langle \text{await } m.\text{owner} = \bot \text{ then } m.\text{owner} := \text{self} \rangle ;
\]
\[
\text{unlock}(m): \langle \text{await } m.\text{owner} = \text{self} \text{ then } m.\text{owner} := \bot \rangle .
\]
The description of unlock is perhaps slightly surprising: it enforces that only the owner of the lock is able to unlock it. A thread that tries to unlock a mutex it does not own has to wait indefinitely. For every mutex \( m \), we use the initialization \( m.\text{owner} = \bot \).
The commands lock and unlock are abbreviations of the POSIX primitives \texttt{pthread_mutex_lock} and \texttt{pthread_mutex_unlock}.
After this preparation we come back to the synchronization of the program of the previous section. In order to allow maximal concurrency, we introduce several mutexes and arrays of mutexes for the protection of specific atomic regions. We introduce a mutex \( \text{mtok}[q] \) to protect \( \text{ctok}[q] \) and a mutex \( \text{mgc} \) to protect \( \text{gc} \). We thus declare
```
mtok: array Process of Mutex ;
mgc: Mutex .
```
The procedures Down and GcDown become
```
procedure Down (q : Process) =
    var b : Boolean ;
    lock (mtok[q]) ;
    ctok[q] := ctok[q] − 1 ; b := (ctok[q] = 0) ;
    unlock (mtok[q]) ;
    if b then GcDown () fi
end .
```
```
procedure GcDown () =
    var b : Boolean ;
    lock (mgc) ;
    gc := gc − 1 ; b := (gc = 0) ;
    unlock (mgc) ;
    if b then Stop () fi
end .
```
We use condition variables for the implementation of the await construct in AwaitEdge. A variable v of type Condition is the name of a list Q(v) of threads that are waiting for a signal. We only use the POSIX primitives pthread_cond_wait and pthread_cond_signal, abbreviated by wait and signal. These primitives are expressed by
\[
\text{wait} (v,m): \langle \text{unlock (m)}; \text{insert self in Q(v)} \rangle ; \text{lock (m)}.\]
Command wait consists of two atomic commands: to start waiting and to lock when released. Note that a thread must own the mutex to execute wait.
Command signal (v) is equivalent to skip if Q(v) is empty. Otherwise, it releases at least one thread waiting at Q(v). This is expressed in
\[
\text{signal} (v): \langle \text{if not isEmpty} (Q(v)) \text{ then release some threads from } Q(v) \text{ fi} \rangle .
\]
Notice that, when some thread signals v and thus releases a waiting thread, the latter need not be able to (immediately) lock the mutex. Some other thread may obtain the mutex first.
Back to the program. We introduce a mutex gate[q] to protect head[q] and cntu[q] in the procedures AwaitEdge, PutEdge, and Stop. We introduce a condition variable cv[q] to signal process q that the condition it may be waiting for has been established. We thus declare
```
gate: array Process of Mutex ;
cv: array Process of Condition .
```
The procedures PutEdge and Stop are translations of their counterparts in Section 4 extended with signals to the possible waiting processes.
```
procedure PutEdge (q : Process; x, y : V; t : Token) =
    pair[t] := (x, y) ;
    lock (gate[q]) ;
    next[t] := head[q] ;
    head[q] := t ;
    signal (cv[q]) ;
    unlock (gate[q])
end .
```
```
procedure Stop () =
    for all q ∈ Process do
        lock (gate[q]) ;
        cntu[q] := false ;
        signal (cv[q]) ;
        unlock (gate[q])
    od
end .
```
AwaitEdge is implemented in
```
procedure AwaitEdge () =
    if head0 = nil then
        lock (gate[self]) ;
        if head[self] = nil ∧ cntu[self] then
            wait (cv[self], gate[self])
        fi ;
        head0 := head[self] ;
        head[self] := nil ;
        unlock (gate[self])
    fi
end .
```
Note that, here, at most one process can be waiting at any condition variable. So, there is no danger that a signal releases more than one thread. On the other hand, since the waiting process is the only process that can invalidate it, the wait condition need not be tested again.
**Remark.** If one removes the lock and unlock in Stop, the program becomes incorrect, since then a process, say \( q \), may observe that the guard in AwaitEdge holds true and another process may falsify cntu[q] and signal cv[q] before \( q \) starts waiting.
It is also possible to implement the await construct in AwaitEdge by means of a split binary semaphore, see e.g. [3].
6. Harvest
After execution of algorithm A or its shared memory version, the connected components of the graph are determined by the function root. We collect this result in a separate array
```
root: array V of V .
```
In view of invariant $J_1$, a sequential algorithm to do this is
```
B: for all n ∈ V do in increasing order
       if par[n] = n then root[n] := n
       else root[n] := root[par[n]] fi
   od .
```
In this way, the connected components of the graph are characterized by a unique representing element, the root of the par tree. Loop B is very efficient, of order $O(\#V)$.
When the graph is very large, distributed harvesting may be indicated. To enable this, we decide that in harvest time all processes are allowed to inspect array par, but inspections and updates of root[n] are only allowed for the owner of node $n$. We therefore write $V(p)$ to denote the set of nodes $n \in V$ with $\text{owner}(n) = p$ and we let the processes perform
```
C: for all n ∈ V(self) do in increasing order
       if par[n] ∈ V(self) then
           if par[n] = n then root[n] := n
           else root[n] := root[par[n]] fi
       else
           r := par[n] ;
           while r ≠ par[r] do r := par[r] od ;
           root[n] := r
       fi
   od .
```
Fragment C has the inefficiency that root paths that leave $V(p)$ may be traversed repeatedly. We therefore introduce the following optimization. For each process, we declare a private variable outList of the type list of nodes with the invariant
\[
J_8: \quad x \in V(p) \land \text{par}[x] \notin V(p) \implies x \in \text{outList}.p .
\]
We take outList.p to be empty initially. Predicate $J_8$ is preserved by program fragment A0. In order to preserve $J_8$ during A1 and A2, we now let the assignments par[x] := y in A1 and A2 be accompanied by the instruction to add $x$ to the private outList.
We now first set all values of root to some reserved value $\bot$ and then determine the roots of the elements of outList.
```
D: for all n ∈ V(self) do root[n] := ⊥ od ;
   for all n ∈ outList do
       r := n ;
       while r ≠ par[r] do r := par[r] od ;
       root[n] := r
   od .
```
After this loop, all points \( x \in V(p) \) with \( \text{par}[x] \notin V(p) \) have the correct value for \( \text{root}[x] \), while the other points \( x \in V(p) \) have \( \text{root}[x] = \bot \). These remaining points of \( V(p) \) are treated in the following loop:
```
E: for all n ∈ V(self) do in increasing order
       if root[n] = ⊥ then
           if par[n] = n then root[n] := n
           else root[n] := root[par[n]] fi
       fi
   od .
```
Here, we use the invariant \( J_1 \). Since the points are treated in increasing order, we have the invariant that \( \text{root}[x] \) has its final value for all \( x < n \). This property is preserved by the body of loop E because of \( J_1 \). The composition \( (D; E) \) is our final version of the harvest. This version is more efficient than version C, since only the root paths of points in \( \text{outList} \) are followed completely.
The list \( \text{outList} \) can be implemented most easily as a stack with maximal size equal to the number of boundary points of the set \( V(p) \). Every element of \( \text{outList} \) is an ancestor of a point of the boundary of \( V(p) \), with all intermediate points within \( V(p) \). This implies that \#\( \text{outList} \) is bounded by the number of elements of the boundary of \( V(p) \).
7. Application to image processing
In this section we focus on the application to image processing. We first consider a grey-scale image represented by a two-dimensional integer-valued array \( \text{im}[H,W] \) (later we will consider three-dimensional ‘images’ as well), where \( H \) and \( W \) are the height and the width of the image, respectively. The first coordinate \( (x) \) denotes the row number (scan line), while the second coordinate \( (y) \) denotes the number of the column. Since grey-scale images are discretizations of real black-and-white photographs there is an implicit underlying grid. We consider the case of 4-connectivity, meaning that pixels (except for boundary pixels) have four neighbours (north, east, south, west). Two neighbouring pixels that have the same image value, are considered to be in the same connected component. So, the graph considered has the pixels as nodes, and two pixels are connected if they are neighbours and have the same image value.
Since the graph under consideration is rectangular, we can distribute it over the \( N \) processes by splitting it in equally sized slices. We have decided to distribute the image on the last coordinate of a pixel. It follows that we split the image in (almost) equally sized vertical slices. The test \( (x,y) \in V(p) \) becomes \( lwb(p) \leq y < lwb(p+1) \), where \( lwb \) is given by
\[
lwb(p) = (p \cdot W) \mathbin{\mathrm{div}} N.
\]
It follows that the corresponding function \( \text{owner} \) satisfies
\[
\text{owner}(y) = ((N \cdot (y + 1)) - 1) \mathbin{\mathrm{div}} W.
\]
The parallel algorithm consists of three phases. In the first phase, algorithm \( A0 \) is applied on the image slices. This is performed in a scan-line fashion, in which for pixel \((x, y)\) only the pixels \((x - 1, y)\) (north) and \((x, y - 1)\) (west) need to be inspected.
In the middle phase, algorithm \( A2 \) is applied to edges of the graph which cross the boundaries of the distribution. In view of invariant \( J_2 \), the list \( \text{edlis1}(p) \) must contain the pixel pairs \((x, y), (x, y - 1)\) with \( y = lwb(p) \) for which \( \text{im}[x, y] = \text{im}[x, y - 1] \). It follows that \( H \) is an upper bound for the length of the edge lists \( \text{edlis1}(p) \). We can therefore take \( M \) of Section 4 to be equal to \( H \).
Since we deal with images, a very effective optimization can be used to reduce the sizes of the buffers \( \text{edlis1}(p) \). Indeed, we need not insert the pair \((x, y), (x, y - 1)\) into \( \text{edlis1}(p) \) if this list already contains the pair \((x - 1, y), (x - 1, y - 1)\) while also \( \text{im}[x, y] = \text{im}[x - 1, y] \). In that case, the pair consists of pixels that are connected already, and we can therefore disregard the new edge. Experiments have shown that for camera-made images this optimization often reduces the size of the buffers significantly. The optimization is used in the initialization \( \text{Init2} \), while the remainder of \( A2 \) is left unmodified. In the final phase, we use the harvesting routine \((D; E)\) to compute the output image.
The algorithm is easily adapted to ‘images’ of higher dimensionality. Apart from choosing another distribution, indexing, and the corresponding functions \( lwb \), \( \text{owner} \) and \( \text{origin} \), no modifications are necessary. We applied the algorithm to a three-dimensional CT-scan data set \( \text{im}[D, H, W] \), where \( D \) is the number of 2-D image slices (depth) of the data set. We used the same functions \( lwb \) and \( \text{owner} \). In this case, the bound \( M \) of the sizes of the lists \( \text{edlis1} \) is \( D \cdot H \).
7.1. Practical results
We applied the shared memory version of the algorithm to a set of seven 2-D test images. Two shared-memory architectures were available to us, namely a Cray J90 vector computer consisting of 32 processors and 4 Gb shared memory, and a Compaq ES40 with 4 Alpha processors and 1 Gb shared memory. The processors of the Cray J90 are shared with other users, and their scheduling is done by the operating system, without any control by the user. It is therefore almost impossible to acquire 32 processors without interference from other users. For this reason, we decided to do time measurements up to 16 processors, which turned out to be reasonably available. Each measurement was performed 100 times, of which the 25 best and the 25 worst measurements were discarded. The remaining 50 measurements were averaged. This
Table 1
Absolute timings in milliseconds on a single CPU for different image sizes
<table>
<thead>
<tr>
<th>Image</th>
<th>ES40</th>
<th>CRAY J90</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>256</td>
<td>512</td>
</tr>
<tr>
<td>empty</td>
<td>10</td>
<td>42</td>
</tr>
<tr>
<td>vline</td>
<td>7</td>
<td>30</td>
</tr>
<tr>
<td>hline</td>
<td>6</td>
<td>28</td>
</tr>
<tr>
<td>comb</td>
<td>7</td>
<td>30</td>
</tr>
<tr>
<td>squares</td>
<td>10</td>
<td>42</td>
</tr>
<tr>
<td>music</td>
<td>10</td>
<td>42</td>
</tr>
<tr>
<td>CT</td>
<td>9</td>
<td>42</td>
</tr>
</tbody>
</table>
Fig. 1. Test images: (a) squares (b) music (c) CT.
way we hope to get a reasonable measurement. On the Compaq ES40, measurements were performed simply 50 times and averaged immediately, since we were the only user on the system. The absolute timings on a single CPU are shown in Table 1. Note that the ES40 performs much better than the Cray. One may realize that the design of the Cray is more than 5 years older than that of the ES40, and that the Cray is a typical vector processor, which is not of any use in our algorithm. Besides, the Compaq has a cache memory on each processor of 512 kB, while the Cray has no cache whatsoever.
The image named empty is a trivial image of which all pixels have the same grey value. The image vline is an image for which pixels \( im[x, y] = 1 \) if \( y \) is even, and \( im[x, y] = 0 \) if \( y \) is odd. The image hline is the image vline rotated over 90 degrees. The image comb is similar to the image vline except that the pixels on the last scanline have grey value 1, i.e. \( im[H - 1, y] = 1 \). Clearly, these images are artificial images. We also used some more realistic images, which are shown in Fig. 1. The first image consists of 50 squares of random sizes, located at random positions. Each square has a unique grey value. The second image is a camera-made image of handwritten music. The third image is slice 50 of a 93 \( \times \) 256 \( \times \) 256 CT-scan of a head. The number of grey values is reduced from 256 to 32 to reduce the influence of noise.
Table 2
Speedups for the test set on the ES40
<table>
<thead>
<tr>
<th>Image</th>
<th>256 × 256</th>
<th>512 × 512</th>
<th>1024 × 1024</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>S₂</td>
<td>S₃</td>
<td>S₄</td>
</tr>
<tr>
<td>empty</td>
<td>1.7</td>
<td>2.0</td>
<td>2.3</td>
</tr>
<tr>
<td>vline</td>
<td>1.7</td>
<td>1.8</td>
<td>2.4</td>
</tr>
<tr>
<td>hline</td>
<td>1.4</td>
<td>1.4</td>
<td>1.3</td>
</tr>
<tr>
<td>comb</td>
<td>1.7</td>
<td>1.8</td>
<td>1.9</td>
</tr>
<tr>
<td>squares</td>
<td>1.7</td>
<td>2.0</td>
<td>2.2</td>
</tr>
<tr>
<td>music</td>
<td>1.5</td>
<td>1.7</td>
<td>1.8</td>
</tr>
<tr>
<td>CT</td>
<td>1.6</td>
<td>1.8</td>
<td>2.0</td>
</tr>
</tbody>
</table>
Table 3
Speedups for the test set on the Cray J90
<table>
<thead>
<tr>
<th>Image</th>
<th>256 × 256</th>
<th>512 × 512</th>
<th>1024 × 1024</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>S₂</td>
<td>S₄</td>
<td>S₈</td>
</tr>
<tr>
<td>empty</td>
<td>1.9</td>
<td>3.5</td>
<td>5.7</td>
</tr>
<tr>
<td>vline</td>
<td>2.0</td>
<td>4.0</td>
<td>6.9</td>
</tr>
<tr>
<td>hline</td>
<td>1.8</td>
<td>3.3</td>
<td>4.5</td>
</tr>
<tr>
<td>comb</td>
<td>1.9</td>
<td>3.4</td>
<td>5.2</td>
</tr>
<tr>
<td>squares</td>
<td>2.0</td>
<td>4.0</td>
<td>6.4</td>
</tr>
<tr>
<td>music</td>
<td>2.0</td>
<td>3.8</td>
<td>5.9</td>
</tr>
<tr>
<td>CT</td>
<td>1.9</td>
<td>2.8</td>
<td>4.4</td>
</tr>
</tbody>
</table>
For the artificial images, path compression is extremely effective. For the more realistic images path compression is worthwhile, but less effective. For these images, it turns out that the running time of the algorithm hardly depends on the content of the image. For most camera-made images, the algorithm runs in approximately the same time.
In Tables 2 and 3, we see the speedup using more than one processor. The measurements are performed on the test set for different image sizes. The number $S_N$ is the speedup of the algorithm running on $N$ processors relative to execution on one processor, defined by $S_N = T_1/T_N$, where $T_N$ is the running time on $N$ processors. We clearly see that the speedup gets better if the computational task size increases. This is to be expected, since the ratio between computation and communication gets in favour of the computational side. This effect is especially severe on the ES40, since its processors are much faster than those of the Cray, while the memory speed (and thus communication speed) is about the same.
On both machines we see that the image vline performs best. Again, this is to be expected, since there is no communication needed at all. The image hline on the other hand performs worst, since here the amount of communication is maximal among the images considered. Even for this case, however, the speedups are satisfactory. The
Table 4
Speedups for the 3-D CT data set
<table>
<thead>
<tr>
<th>N</th>
<th>SN</th>
<th>N</th>
<th>SN</th>
<th>N</th>
<th>SN</th>
<th>N</th>
<th>SN</th>
<th>N</th>
<th>SN</th>
</tr>
</thead>
<tbody>
<tr>
<td>2</td>
<td>1.8</td>
<td>3</td>
<td>2.4</td>
<td>4</td>
<td>3</td>
<td>3.1</td>
<td>4</td>
<td>3.8</td>
<td>7</td>
</tr>
<tr>
<td>3</td>
<td>2.0</td>
<td>5</td>
<td>2.9</td>
<td>6</td>
<td>2.4</td>
<td>7</td>
<td>2.9</td>
<td>6</td>
<td>2.9</td>
</tr>
<tr>
<td>4</td>
<td>4.6</td>
<td>8</td>
<td>5.6</td>
<td>9</td>
<td>5.6</td>
<td>9</td>
<td>5.6</td>
<td>9</td>
<td>5.6</td>
</tr>
<tr>
<td>5</td>
<td>7.2</td>
<td>7.9</td>
<td>7.2</td>
<td>7.9</td>
<td>7.9</td>
<td>7.9</td>
<td>7.9</td>
<td>7.9</td>
<td>7.9</td>
</tr>
<tr>
<td>11</td>
<td>9.2</td>
<td>12</td>
<td>9.9</td>
<td>13</td>
<td>10.5</td>
<td>16</td>
<td>12.7</td>
<td>16</td>
<td>12.7</td>
</tr>
</tbody>
</table>
image empty gains most from the optimization mentioned above. For this image, the lists edlis1 initially each contain only a single pair.
For the more realistic images squares, music and CT, we see very nice results. This is of course the main goal of the algorithm. For large enough images, up to about 8 processors we see an almost linear speedup. If we add more processors, we see a slight drop in the efficiency as a result of the relative increase of communication with respect to the computational task. However, an efficiency of generally more than 75% is very satisfactory.
We also applied the 3-D version of the algorithm to a CT data set with sizes 93 × 256 × 256. In Fig. 1(c), we see slice 50 of this set. The number of grey values was reduced from 256 to 32. In Table 4, we present the results for both architectures. The left-hand frame contains the results on the ES40, with $T_1 = 3.1$ s. The right-hand frame contains the results on the Cray J90, with $T_1 = 149$ s. The results show the same tendencies as the two-dimensional results.
8. Conclusion
The computation of the connected components of an image (2-D or 3-D) can effectively be distributed over a number of processors. The amount of communication needed can only be determined at runtime, but is for most natural images quite modest. We used a variation of Tarjan’s connected components algorithm. The communication is based on message passing, but implemented in shared variables by means of POSIX thread primitives. The experiments show a speedup that is often almost linear in the number of processors.
References
VMs for Portability: BCPL
2.1 Introduction
BCPL is a high-level language for systems programming that is intended to be as portable as possible. It is now a relatively old language but it contains most syntactic constructs found in contemporary languages. Indeed, C was designed as a BCPL derivative (C can be considered as a mixture of BCPL and Algol68 plus some \textit{sui generis} features). BCPL is not conventionally typed. It has one basic data type, the machine word. It is possible to extract bytes from words but this is a derived operation. All entities in BCPL are considered either to be machine words or to require a machine word or a number of machine words. BCPL supports addresses and assumes that they can fit into a single word. Similarly, it supports vectors (one-dimensional arrays) which are sequences of words (multi-dimensional arrays must be explicitly programmed in terms of vectors of pointers to vectors). Routines (procedures and functions) can be defined in BCPL and are represented as pointers to their entry points. Equally, labels are addresses of sequences of instructions.
BCPL stands for “Basic CPL”, a subset of the CPL language. CPL was an ambitious lexically scoped, imperative procedural programming language designed by Strachey and others in the mid-1960s as a joint effort involving Cambridge and London Universities. CPL contained all of the most advanced language constructs of the day, including polymorphism. There is a story that the compiler was too large to run on even the biggest machines available in the University of London! Even though it strictly prefigures the structured programming movement, BCPL contains structured control constructs (commands) including two-branch conditionals, switch commands, structured loops with structured exits. It also supports statement formulae similar to those in FORTRAN and the original BASIC. Recursive routines can be defined. BCPL does support a goto command. Separate compilation is supported in part by the provision of a “global vector”, a vector of words that contains pointers to externally defined routines. BCPL is lexically scoped. It implements call-by-value semantics for routine parameters. It also permits higher-order functions.
BCPL was intended to be portable. Portability is achieved by bootstrapping the runtime system a number of times so that it eventually implements the compiler's output language. This language is called OCODE. OCODE is similar to a high-level assembly language but is tailored exactly to the intermediate representation of BCPL constructs. OCODE was also defined in such a way that it could be translated into the machine language of most processors. Associated with OCODE is an OCODE machine that, once implemented, executes OCODE, hence compiled BCPL. The implementation of an abstract machine for OCODE is relatively straightforward.
In the book on BCPL [45], Richards and Whitby-Strevens define a second low-level intermediate language called Intcode. Intcode is an extremely simple language that can be used to bootstrap OCODE. More recently, Richards has defined a new low-level bootstrap code called Cintcode. The idea is that a fundamental system is first written for Intcode/Cintcode. This is then used to bootstrap the OCODE evaluator. The definition of the Intcode and Cintcode machines is given in the BCPL documentation. The BCPL system was distributed in OCODE form (more recent versions distribute executables for standard architectures like the PC under Linux). At the time the book was published, an Intcode version of the system was required to bootstrap a new implementation.
The virtual machines described below are intended, therefore, as an aid to portability. The definitions of the machines used to implement OCODE and Intcode/Cintcode instructions include definitions of the storage structures and layout required by the virtual machine, as well as the instruction formats and state transitions.
The organisation of this chapter is as follows. We will focus first on BCPL and its intermediate languages OCODE and Intcode/Cintcode (Cintcode is part of the current BCPL release and access to the documentation is relatively easy). We will begin with a description of the OCODE machine. This description will start with a description of the machine's organisation and then we move on to a description of the instruction set. The relationship between OCODE instructions and BCPL's semantics will also be considered. Then, we will examine Cintcode and its abstract machine. Finally, we explain how BCPL can be ported to a completely new architecture.
### 2.2 BCPL the Language
In this section, the BCPL language is briefly described.
BCPL is what we would now see as a relatively straightforward procedural language. As such, it is based around the concept of the procedure. BCPL provides three types of procedural abstraction:
- Routines that update the state and return no value;
- Routines that can update the state and return a single value;
- Routines that just compute a value.
The first category refers to procedures proper, while the second corresponds to the usual concept of function in procedural languages. The third category corresponds to the single-line functions in FORTRAN and in many BASIC dialects. Each category permits the programmer to pass parameters, which are called by value.
BCPL also supports a variety of function that is akin to the so-called “formula function” of FORTRAN and BASIC. This can be considered a variety of macro or open procedure because it declares no local variables.
BCPL supports a variety of state-modifying constructs. As an imperative language, it should be obvious that it contains an assignment statement. Assignment in BCPL can be simple or multiple, so the following are both legal:
\[
\begin{align*}
x &:= 0; \\
x, y &:= 1, 2;
\end{align*}
\]
It is worth noting that terminating semicolons are optional. They are mandatory if more than one command is to appear on the same line as in:
\[
\begin{align*}
x &:= 0; y := 2
\end{align*}
\]
Newline, in BCPL, can also be used to terminate a statement. This is a nice feature, one found in only a few other languages (Eiffel and Imp, a language used in the 1970s at Edinburgh University).
Aside from this syntactic feature, the multiple assignment gives a clue that the underlying semantics of BCPL are based on a stack.
In addition, it contains a number of branching constructs:
• **IF** . . . **DO**. This is a simple test. If the test is true, the code following the **DO** is executed. If the test is false, the entire statement is a no-operation.
• **UNLESS** . . . **DO**. This is syntactic sugar for **IF NOT** . . . **DO**. That is, the code following the **DO** is executed if the test fails.
• **TEST** . . . **THEN** . . . **ELSE**. This corresponds to the usual if then else in most programming languages.
• **SWITCHON**. This is directly analogous to the **case** statement in Pascal and its descendants and to the **switch** statement in C and its derivatives. Cases are marked using the **CASE** keyword. Cases run into each other unless explicitly broken. There is also an optional default case denoted by a keyword. Each case is implicitly a block.
In general, the syntax word **DO** can be interchanged with **THEN**. In the above list, we have followed the conventions of BCPL style.
BCPL contains a number of iterative statements. The iterative statements are accompanied by structured ways to exit loops.
---
1 Keywords must be in uppercase, so the convention is followed here.
BCPL has a goto, as befits its age.
BCPL statements can be made to return values. This is done using the pair of commands VALOF and RESULTIS. The VALOF command introduces a block from which a value is returned using the RESULTIS command; there can be more than one RESULTIS command in a VALOF block. The combination of VALOF and RESULTIS is used to return values from functions. The following is a BCPL procedure:
\[
\text{LET Add.Global (x) BE}
\]
\[
\text{\$( globl := globl + x \$)}
\]
The following is a BCPL functional routine:
\[
\text{LET Global.Added.Val (x) = VALOF}
\]
\[
\text{\$( RESULTIS x + globl \$)}
\]
From this small example, it can be seen that the body of a procedure is marked by the BE keyword, while functional routines are signalled by the equals sign and the use of VALOF and RESULTIS (BCPL is case-sensitive).
BCPL is not conventionally typed. It has only one data type, the machine word, whose size can change from machine to machine. The language also contains operators that access the bytes within a machine word. Storage is allocated by the BCPL compiler in units of one machine word. The language contains an operator that returns the address of a word and an operator that, given an address, returns the contents of the word at that address (dereferencing).
BCPL supports structured types to a limited extent. It permits the definition of vectors (single-dimension arrays of words). It also has a table type. Tables are vectors of words that are indexed by symbolic constants, not by numerical values. In addition, it is possible to take the address of a routine (procedure or function); such addresses are the entry points of the routines (as in C). The passing of routine addresses is the method by which BCPL supports higher-order routines (much as C does).
It also permits the definition of symbolic constants. Each constant is one machine word in length.
BCPL introduces entities using the LET syntax derived from ISWIM. For example, the following introduces a new variable that is initialised to zero:
\[
\text{LET x := 0 IN}
\]
The following introduces a constant:
LET x = 0 IN
Multiple definitions are separated by the **AND** keyword (logical conjunction is represented by the “&” symbol) as in:
LET x := 0
AND y := 0
IN
Routines are also introduced by the **LET** construct.
Variables and constants can be introduced at the head of any block.
In order to support separate compilation and to ease the handling of the runtime library, a **global vector** is supported. This is a globally accessible vector of words, in which the first few dozen entries are initialised by the runtime system (they are initialised to library routine entry points and to globally useful values). The programmer can also assign to the global vector at higher locations (care must be taken not to assign to locations used by the system). These are the primary semantic constructs of BCPL. Given this summary, we can now make some observations about the support required by the virtual machine (the OCODE machine).
### 2.3 VM Operations
The summary of BCPL above was intended to expose the major constructs. The identification of major constructs is important for the design of a virtual machine which must respect the semantics of the language as well as providing the storage structures required to support the language.
At this stage, it should be clear that a BCPL machine should provide support for the primitive operations needed for the manipulation of data of all primitive types. The virtual machine support for them will be in the form of instructions that the machine will directly implement. In BCPL, this implies that the virtual machine must support operations on the word type: arithmetic operations, comparisons and addressing. Byte-based operations can either be provided by runtime library operations or by instructions in the virtual machine; BCPL employs the latter for the reason that it is faster and reduces the size of the library. In addition, BCPL supports vectors on the stack; they must also be addressed when designing an appropriate virtual machine.
The values manipulated by these operations must be stored somewhere: a storage area, particularly for temporary and non-global values must be provided. Operations are required for manipulating this storage area. Operations are also required to load values from other locations and to store them as results. More than one load operation might be required (in a more richly typed language, this might be a necessity) and more than one store operation might be required. It is necessary to look at the cases to determine what is required.
BCPL employs static scoping. The compiler can be relied upon to check that variables and other entities are in scope wherever they are referenced. Static scoping requires a stack-like mechanism for the storage of variables. The virtual machine is, therefore, built around a stack. Operations are required to allocate and free regions of stack at routine entry and exit; the return of results can also be implemented by means of stack allocation and addressing. The compiler generates instructions that allocate and free the right amount of stack space; it also generates instructions to handle returned values and the adjustment of the stack when routines return. Evaluation of expressions can be performed on the stack, so we now are in a position to define the instructions for data manipulation.
With expressions out of the way, the following families of construct must be handled by the compiler and OCODE instructions generated to implement them:
- Control constructs, in particular, conditionals, iteration, jumps;
- Assignment;
- Routine call and return;
- Parameter passing and value return from routines and valof.
Note that we assume that sequencing is handled implicitly by the compiler.
Control structure is handled, basically, by means of labels and jumps. There are clear translations between most of the control structures and label-jump combinations. The problem cases are FOR and SWITCHON. The former is problematic because it requires counters to be maintained and updated in the right order; the latter because the best implementation requires a jump table.
Assignment is a relatively straightforward matter (essentially, push a value onto the stack and pop it off to some address or other). Multiple assignment is also easy with a stack machine. The values are pushed onto the stack in some order (say left to right) and popped in the reverse order. Thus, the command:
\[
p, q := 1, 2
\]
has the intention of assigning 1 to \( p \) and 2 to \( q \). This can be done by pushing 1, then 2 onto the stack and assigning them in reverse order. An interesting example of multiple assignment is:
\[
p, q := q, p
\]
Swap! It can be handled in exactly the manner just described.
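The stack discipline just described can be sketched in C. This is an illustration of the compilation strategy, not the actual OCODE emitted; the helper names (`push`, `pop`, `swap_via_stack`) are mine.

```c
#include <assert.h>

/* A small operand stack standing in for the OCODE frame. */
static long stk[16];
static int  sp = 0;          /* index of the first free slot */

static void push(long v) { stk[sp++] = v; }
static long pop(void)    { return stk[--sp]; }

/* p, q := q, p -- push the right-hand sides left to right,
   then pop into the targets in reverse order. */
void swap_via_stack(long *p, long *q) {
    push(*q);
    push(*p);
    *q = pop();
    *p = pop();
}
```

Because the pops mirror the pushes, no explicit temporary is needed: the stack itself is the temporary, which is why multiple assignment is easy on a stack machine.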
Finally, we have routine calls and VALOF. There are many ways to implement routine calls. For software virtual machines, relatively high-level instructions can be used (although low-level instructions can also be employed). The OCODE machine provides special instructions for handling routine entry and exit, as will be seen.
BCPL is a call-by-value language, so the runtime stack can be directly employed to hold parameter values that are to be passed into the routine.
The VALOF ... RESULTIS combination can be handled in a variety of ways. One is to perform a source-to-source transformation. Another is to use the stack at runtime by introducing a new scope level. Variables local to the VALOF can be allocated on the runtime stack with the stack then being used for local values until the RESULTIS is encountered. An implementation for RESULTIS would be to collapse the stack to the point where the VALOF was encountered and then push the value to be returned onto the stack.
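The runtime-stack strategy for VALOF and RESULTIS can be sketched as follows. The function names and the idea of an explicit "mark" are illustrative assumptions; the point is only the collapse-then-push behaviour described above.

```c
/* Model of VALOF/RESULTIS on a stack: remember the stack level
   on entering the VALOF block, and on RESULTIS collapse back to
   that level before pushing the returned value. */
static long vstk[64];
static int  vs = 0;             /* index of the first free slot */

int  valof_enter(void)  { return vs; }      /* remember the level */
void push_local(long v) { vstk[vs++] = v; } /* a VALOF-local value */

long resultis(int mark, long v) {
    vs = mark;                  /* discard the VALOF block's locals */
    vstk[vs++] = v;             /* push the result                  */
    return vstk[vs - 1];
}
```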
### 2.4 The OCODE Machine
In this section, the organisation of the OCODE machine is presented. BCPL is a procedural programming language that supports recursion. It requires a globally accessible vector of words to support separate compilation. It also requires a pool of space to represent global variables. The language also permits the use of (one-dimensional) vectors and tables (essentially vectors of words whose elements are indexed by symbolic identifiers, much like tables in assembly language). As a consequence, the OCODE machine must reserve space for a stack to support lexical scope and for recursion. The OCODE machine also needs space to hold the global vector and also needs a space to hold program instructions.

The OCODE machine has three memory regions:
- The Global vector;
- The Stack (this is a framed stack);
- Storage for program code and static data.
The organisation of the OCODE machine is shown in Figure 2.1.
The global vector is used to store all variables declared global in the program. The global vector is a vector of words containing global variables; it also contains the entry points of routines declared in one module that are to be made visible in another. It is pointed to by the G register. The current stack frame is pointed to by the P register. The size of the current stack frame is always known at compilation time, so it need not be represented in code by a register.
There is also a special A register which is used to hold values returned by functions (see below).
Static variables, tables and string constants are stored in the program area. They are referenced by labels which are usually represented by the letter L followed by one or more digits.
The stack holds all dynamic (local) variables.
All variables are of the same size. That is, all variables are allocated the same amount of space in the store. For most modern machines they are 32 or 64 bits in length.
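The organisation just described can be captured as a state record. The field names and region sizes below are assumptions made for illustration; the OCODE specification fixes only the three regions and the G, P and A registers.

```c
#include <stdint.h>

typedef intptr_t word;          /* one machine word: 32/64-bit  */

enum { NGLOBALS = 1000, NSTACK = 4096, NPROG = 8192 };

/* The three memory regions plus the registers named in the text. */
struct ocode_machine {
    word global[NGLOBALS];      /* global vector, via register G */
    word stack[NSTACK];         /* framed stack, frame base in P */
    word program[NPROG];        /* code, static data, strings    */
    word *G, *P;                /* global and frame pointers     */
    word  A;                    /* function-result register      */
};

void ocode_init(struct ocode_machine *m) {
    m->G = m->global;           /* G addresses the global vector */
    m->P = m->stack;            /* initial (outermost) frame     */
    m->A = 0;
}
```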
### 2.5 OCODE Instructions and their Implementation
In OCODE, instructions are represented as integers. Here, we will use only the mnemonic names in the interests of readability. It is important to note that the mnemonic form for instructions and labels must be converted into more fundamental representations when code is emitted by the compiler.
The size of the current stack frame is always known at compile time. When specifying instructions, a variable, S, is used to denote an offset from the start of the current stack frame. This is done only to show how much space is left in the current stack frame by the individual instructions.
When defining abstract machine instructions, an array notation will be employed. Thus, P is considered as a one-dimensional vector. S will still be a constant denoting the size of the current stack frame. Similarly, G will also be considered as an array.
The notation P[S-1] denotes the element on the top of the stack; P[S] denotes the first free element.
#### 2.5.1 Expression Instructions
The OCODE instructions that implement expressions do not alter the stack frame size. In the case of unary instructions, the operand is replaced on the top of the stack by the result of the instruction. In the case of binary operations, the stack element immediately beneath the top one is replaced by the result.
The instructions are mostly quite clear. Rather than enter into unnecessary detail, these instructions are summarised in Table 2.1. The table’s middle column is a short English equivalent for the opcode.
Only the first instruction deserves any real comment. It is an instruction that considers the current top-of-stack element as a pointer into memory. It replaces the top-of-stack element by the object that it points to. This is the operation of dereferencing a pointer to yield an r-value.
Table 2.1. OCODE expression instructions.
<table>
<thead>
<tr>
<th>Opcode</th>
<th>Description</th>
<th>Definition</th>
</tr>
</thead>
<tbody>
<tr>
<td>RV</td>
<td>r-value</td>
<td>(P[S-1] := \text{cts}(P[S-1]))</td>
</tr>
<tr>
<td>ABS</td>
<td>absolute value</td>
<td>(P[S-1] := \text{abs}(P[S-1]))</td>
</tr>
<tr>
<td>NEG</td>
<td>unary minus</td>
<td>(P[S-1] := -P[S-1])</td>
</tr>
<tr>
<td>NOT</td>
<td>logical negation</td>
<td>(P[S-1] := \neg(P[S-1]))</td>
</tr>
<tr>
<td>GETBYTE</td>
<td>extract byte</td>
<td>(P[S-2] := P[S-2] \text{ gtb } P[S-1])</td>
</tr>
<tr>
<td>MULT</td>
<td>multiply</td>
<td>(P[S-2] := P[S-2] \cdot P[S-1])</td>
</tr>
<tr>
<td>DIV</td>
<td>divide</td>
<td>(P[S-2] := P[S-2] \div P[S-1])</td>
</tr>
<tr>
<td>REM</td>
<td>remainder</td>
<td>(P[S-2] := P[S-2] \text{ rem } P[S-1])</td>
</tr>
<tr>
<td>PLUS</td>
<td>add</td>
<td>(P[S-2] := P[S-2] + P[S-1])</td>
</tr>
<tr>
<td>MINUS</td>
<td>subtract</td>
<td>(P[S-2] := P[S-2] - P[S-1])</td>
</tr>
<tr>
<td>EQ</td>
<td>equal</td>
<td>(P[S-2] := P[S-2] = P[S-1])</td>
</tr>
<tr>
<td>NE</td>
<td>not equal</td>
<td>(P[S-2] := P[S-2] \neq P[S-1])</td>
</tr>
<tr>
<td>LS</td>
<td>less than</td>
<td>(P[S-2] := P[S-2] < P[S-1])</td>
</tr>
<tr>
<td>GR</td>
<td>greater than</td>
<td>(P[S-2] := P[S-2] > P[S-1])</td>
</tr>
<tr>
<td>LE</td>
<td>less than or equal</td>
<td>(P[S-2] := P[S-2] \leq P[S-1])</td>
</tr>
<tr>
<td>GE</td>
<td>greater than or equal</td>
<td>(P[S-2] := P[S-2] \geq P[S-1])</td>
</tr>
<tr>
<td>LSHIFT</td>
<td>left shift</td>
<td>(P[S-2] := P[S-2] \ll P[S-1])</td>
</tr>
<tr>
<td>RSHIFT</td>
<td>right shift</td>
<td>(P[S-2] := P[S-2] \gg P[S-1])</td>
</tr>
<tr>
<td>LOGAND</td>
<td>logical and</td>
<td>(P[S-2] := P[S-2] \text{ and } P[S-1])</td>
</tr>
<tr>
<td>LOGOR</td>
<td>logical or</td>
<td>(P[S-2] := P[S-2] \text{ or } P[S-1])</td>
</tr>
<tr>
<td>EQV</td>
<td>bitwise equal</td>
<td>(P[S-2] := P[S-2] \text{ leq } P[S-1])</td>
</tr>
<tr>
<td>NEQV</td>
<td>xor</td>
<td>(P[S-2] := P[S-2] \text{ xor } P[S-1])</td>
</tr>
</tbody>
</table>
Table 2.1 employs a notational convention that needs explanation:
- \(\text{cts}\) is the contents operation (dereferences its argument).
- \(\text{abs}\) is the absolute value of its argument.
- \(\text{gtb}\) is the getbyte operator.
- \(\text{rem}\) is integer remainder after division.
- \(\text{and}\) is logical and (conjunction).
- \(\text{or}\) is logical or (disjunction).
- \(\text{leq}\) is bitwise equivalence.
- \(\text{xor}\) is bitwise exclusive or (logical not-equivalence).
- \(e_1 \ll e_2\) is left shift \(e_1\) by \(e_2\) bits.
- \(e_1 \gg e_2\) is right shift \(e_1\) by \(e_2\) bits.
Other than this, the “description” of each instruction is just an operation on the OCODE stack. In this and the following cases, the code equivalent is given in the rightmost column of the table.
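A handful of the Table 2.1 transitions can be transcribed directly into a small interpreter. The layout follows the text's convention: S indexes the first free slot, so the top of the stack is P[S-1]. Memory is modelled as a flat word array so that RV can treat a stack value as an address (an index); all names here are mine.

```c
#include <assert.h>

enum op { RV, NEG, MULT, PLUS, EQ };

static long P[64];              /* the operand stack            */
static int  S = 0;              /* index of the first free slot */

void push_val(long v) { P[S++] = v; }

void exec_op(enum op o, const long *mem) {
    switch (o) {
    case RV:   P[S-1] = mem[P[S-1]];          break; /* dereference */
    case NEG:  P[S-1] = -P[S-1];              break;
    case MULT: P[S-2] = P[S-2] * P[S-1]; S--; break;
    case PLUS: P[S-2] = P[S-2] + P[S-1]; S--; break;
    case EQ:   P[S-2] = P[S-2] == P[S-1]; S--; break;
    }
}
```

Note how the binary operations mirror the table exactly: the result replaces the element beneath the top, and the frame shrinks by one word.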
#### 2.5.2 Load and Store Instructions
The load and store instructions, like those for expressions, should be fairly clear. The code equivalents are included in the right-hand column of Table 2.2. Each instruction is described (middle column of the table).
<table>
<thead>
<tr>
<th>Opcode</th>
<th>Description</th>
<th>Definition</th>
</tr>
</thead>
<tbody>
<tr>
<td>LP n</td>
<td>load from P</td>
<td>(P[S] := P[n]; S := S+1)</td>
</tr>
<tr>
<td>LG n</td>
<td>load global</td>
<td>(P[S] := G[n]; S := S+1)</td>
</tr>
<tr>
<td>LL Ln</td>
<td>load from label</td>
<td>(P[S] := \text{cts}(L_n); S := S+1)</td>
</tr>
<tr>
<td>LLP n</td>
<td>load local address</td>
<td>(P[S] := @P[n]; S := S+1)</td>
</tr>
<tr>
<td>LLG n</td>
<td>load global addr</td>
<td>(P[S] := @G[n]; S := S+1)</td>
</tr>
<tr>
<td>LLL Ln</td>
<td>load label addr</td>
<td>(P[S] := L_n; S := S+1)</td>
</tr>
<tr>
<td>SP n</td>
<td>store off P</td>
<td>(P[n] := P[S-1]; S := S-1)</td>
</tr>
<tr>
<td>SG n</td>
<td>store global</td>
<td>(G[n] := P[S-1]; S := S-1)</td>
</tr>
<tr>
<td>SL Ln</td>
<td>store at label</td>
<td>(\text{cts}(L_n) := P[S-1]; S := S-1)</td>
</tr>
<tr>
<td>LF Ln</td>
<td>load function</td>
<td>(P[S] := \text{entry point } L_n; )</td>
</tr>
<tr>
<td></td>
<td></td>
<td>(S := S+1)</td>
</tr>
<tr>
<td>LNNn</td>
<td>load constant</td>
<td>(P[S] := n; S := S+1)</td>
</tr>
<tr>
<td>TRUE</td>
<td>true</td>
<td>(P[S] := \text{true}; S := S+1)</td>
</tr>
<tr>
<td>FALSE</td>
<td>false</td>
<td>(P[S] := \text{false}; S := S+1)</td>
</tr>
<tr>
<td>LSTR</td>
<td>load string</td>
<td>(P[S] := "C_1 \ldots C_n"; S := S+1)</td>
</tr>
<tr>
<td>STIND</td>
<td>store index</td>
<td>(\text{cts}(P[S-1]) := P[S-2]; S := S-2)</td>
</tr>
<tr>
<td>PUTBYTE</td>
<td>put byte</td>
<td>(\text{setbyte}(P[S-2], P[S-1]) := P[S-3]; S := S-3)</td>
</tr>
</tbody>
</table>
There is an instruction not included in Table 2.2 that appears in the OCODE machine specification in [44]. It is the \texttt{QUERY} instruction. It is defined as:
\[
P[S] := ?; S := S+1
\]
Unfortunately, [44] does not contain a description of it. The remaining instructions have an interpretation that is fairly clear and is included in the table. It is hoped that the relatively brief description is adequate.
#### 2.5.3 Instructions Relating to Routines
This class of instruction deals with routine entry (call) and return. When it compiles a routine, the OCODE compiler generates code of the following form:
ENTRY Li n C1 ... Cn
SAVE s
<body of routine>
ENDPROC
Here, \text{Li} is the label of the routine’s entry point. For debugging purposes, the length of the routine’s identifier is recorded in the code (this is \text{n} in the code fragment); the characters comprising the name are the elements denoted \text{C1} to \text{Cn}. The instructions in this category are shown in Table 2.3.
The \text{SAVE} instruction specifies the initial setting of the S register. The value of this is the save space size (3) plus the number of formal parameters. The save space is used to hold the previous value of P, the return address and the routine entry address. The first argument to a routine is always at the location denoted by 3 relative to the pointer P (some versions of BCPL have a different save space size, so the standard account is followed above).
The end of each routine is denoted by \text{ENDPROC}. This is a no-op which allows the code generator to keep track of nested procedure definitions.
The BCPL standard requires that arguments are allocated in consecutive locations on the stack. There is no \text{a priori} limit to the number of arguments that can be supplied. A typical call of the form:
E(E1, ..., En)
is compiled as follows (see Table 2.3). First, S is incremented to allocate space for the save space in the new stack frame. The arguments E1 to En are compiled and then the code for E. Finally, either an FNAP k or an RTAP k instruction is generated, the actual one depending upon whether a function or routine call is being compiled. The value k is the distance between the old and new stack frames (i.e., the number of words between the start of the newly created stack frame and the start of the previous one on the stack).
<table>
<thead>
<tr>
<th>Opcode</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td>ENTRY</td>
<td>enter routine</td>
</tr>
<tr>
<td>SAVE</td>
<td>save locals</td>
</tr>
<tr>
<td>ENDPROC</td>
<td>end routine</td>
</tr>
<tr>
<td>FNAP k</td>
<td>apply function</td>
</tr>
<tr>
<td>RTAP k</td>
<td>apply procedure</td>
</tr>
<tr>
<td>RTRN</td>
<td>return from procedure</td>
</tr>
<tr>
<td>FNRN</td>
<td>return from function</td>
</tr>
</tbody>
</table>
Table 2.3. OCODE instructions for routines.
Return from a routine is performed by the \text{RTRN} instruction. This restores the previous value of P and resumes execution from the return address. If the return is from a function, the \text{FNRN} instruction is planted just after the
result has been evaluated (this is always placed on the top of the stack). The FNRN instruction is identical to RTRN except that it first stores the result in the A register, ready for the FNAP instruction to store it at the required location in the previous stack frame.
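The frame discipline described above can be sketched in C. The three-word save space holds the old P, the return address and the entry address, with the first argument at offset 3, as the text states; "code addresses" here are faked as plain integers, and the function names are mine.

```c
#include <assert.h>

enum { SAVESPACE = 3 };

static long stk2[256];          /* the runtime stack    */
static long Pr = 0;             /* current frame base   */
static long Ar = 0;             /* the result register  */

/* Create a frame k words up the stack and record linkage. */
void ocode_call(long k, long retaddr, long entry) {
    long newP = Pr + k;
    stk2[newP + 0] = Pr;        /* previous P           */
    stk2[newP + 1] = retaddr;   /* return address       */
    stk2[newP + 2] = entry;     /* entry address        */
    Pr = newP;
}

/* FNRN: store the result in A, then return like RTRN. */
long ocode_fnrn(long result) {
    long ret = stk2[Pr + 1];    /* where execution resumes    */
    Ar = result;                /* the result travels in A    */
    Pr = stk2[Pr + 0];          /* restore the caller's frame */
    return ret;
}
```

Because the save space sits at the base of the callee's frame, restoring the caller is a single load of the saved P, which is what makes RTRN and FNRN so cheap.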
#### 2.5.4 Control Instructions
Control instructions are to be found in most virtual machines. Their function is centred around the transfer of control from one point to another in the code. Included in this set are instructions to create labels in code. The OCODE control instructions are shown in Table 2.4.
Table 2.4. OCODE control instructions.
<table>
<thead>
<tr>
<th>Opcode</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td>LAB Ln</td>
<td>declare label</td>
</tr>
<tr>
<td>JUMP Ln</td>
<td>unconditionally jump to label</td>
</tr>
<tr>
<td>JT Ln</td>
<td>jump if top of stack is true</td>
</tr>
<tr>
<td>JF Ln</td>
<td>jump if top of stack is false</td>
</tr>
<tr>
<td>GOTO E</td>
<td>computed goto (see below)</td>
</tr>
<tr>
<td>RES Ln</td>
<td>return a result via A; jump to Ln</td>
</tr>
<tr>
<td>RSTACK k</td>
<td>store A at P!k; set S to k+1</td>
</tr>
<tr>
<td>SWITCHON n Ld K1 L1 … Lk</td>
<td>jump table for a SWITCHON.</td>
</tr>
<tr>
<td>FINISH</td>
<td>terminate execution</td>
</tr>
</tbody>
</table>
The JUMP Ln instruction transfers control unconditionally to the label Ln. The instructions JT and JF transfer control to their labels if the top of the stack (implemented as P ! (S-1)) is true or false, respectively. Instructions like this are often found in the instruction sets of virtual machines. The conditional jumps are used, inter alia, in the implementation of selection and iteration commands.
Although they are particular to OCODE, the other instructions also represent typical operations in a virtual machine. The LAB instruction (really a pseudo-operation) declares its operand as a label (thus associating the address at which it occurs with the label).
The GOTO instruction is used to generate code for SWITCHON commands. It takes the form GOTO E, where E is an expression. In the generated code, the code for E is compiled and immediately followed by the GOTO instruction. At runtime, the expression is evaluated, leaving an address on the top of the stack. The GOTO instruction then transfers control to that address.
The RES and RSTACK instructions are used to compile RESULTIS commands. If the argument to a RESULTIS is immediately returned as the result of a function, the FNRN instruction is selected. In all other contexts, RESULTIS e compiles to the code for e followed by the RES Ln instruction. The execution of this instruction places the result in the A register and then jumps to
the label $L_n$. The label addresses an RSTACK $k$ instruction, which takes the result and stores it at location $P!k$ and sets $S$ to $k+1$.
The OCODE SWITCHON instruction performs a jump based on the value on the top of the stack. It is used to implement switches (SWITCHON commands, otherwise known as case statements). It has the form shown in Table 2.4, where $n$ is the number of cases to which to switch and $L_d$ is the label of the default case. The $K_i$ are the case constants and the $L_i$ are the corresponding code labels.
Finally, the FINISH instruction implements the BCPL FINISH command. It compiles to $\text{stop}(0)$ in code and causes execution to terminate.
#### 2.5.5 Directives
It is intended that BCPL programs be compiled to OCODE (or native code) and then executed in their entirety. The BCPL system is not intended to be incremental or interactive. It is necessary, therefore, for the compiler to provide information to the runtime system that relates to the image file that it is to execute. This is the role of the directives.
The BCPL OCODE machine manages a globals area, a stack and a code segment. The runtime system must be told how much space to allocate to each. It must also be told where globals are to be located and where literal pools start and end, so that modules can be linked. The system also needs to know which symbols are exported from a module and where modules start and end.
The BCPL global vector is a case in point. There is no a priori limit on the size of the global vector. In addition, two modules can assign different values to a particular cell in the global vector (with all the ordering problems that are so familiar).
The OCODE generator also needs to be handed information in the form of directives. The directives in the version of BCPL that is current at the time of writing (Summer, 2004) are as shown in Table 2.5. The directives are used in different parts of the system, so are briefly explained in the following few paragraphs.
<table>
<thead>
<tr>
<th>Directive</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>STACK $s$</td>
<td>declare the current stack-frame size to be $s$</td>
</tr>
<tr>
<td>STORE</td>
<td>end of declarations; store initialisation values left on the stack</td>
</tr>
<tr>
<td>ITEMN $n$</td>
<td>reserve a static cell initialised to $n$</td>
</tr>
<tr>
<td>DATALAB $L_n$</td>
<td>associate label $L_n$ with the following data area</td>
</tr>
<tr>
<td>SECTION</td>
<td>start of a module</td>
</tr>
<tr>
<td>NEEDS</td>
<td>name a module upon which the current one depends</td>
</tr>
<tr>
<td>GLOBAL $n$ $K_1 L_1 \ldots K_n L_n$</td>
<td>end of module; global-vector initialisation list</td>
</tr>
</tbody>
</table>
The **STACK** directive informs the code generator of the current size of the stack. This is required because the size of the current stack frame can be affected by some control structures, for example those that leave a block in which local variables have been declared.
The **STORE** directive informs the code generator that the point separating the declarations and code in a block has been reached. Any values left on the stack are to be treated as variable initialisations and should be stored in the appropriate places.
Static variables and tables are allocated in the program code area using the **ITEMN** directive. The parameter to this directive is the initial value of the cell that is reserved by this directive. For a table, the elements are allocated by consecutive **ITEMN** directives. The **DATALAB** directive is used to associate a label with a data area reserved by one or more **ITEMN** directives.
The **SECTION** and **NEEDS** directives are direct translations of the **SECTION** and **NEEDS** source directives. The latter are used to indicate the start of a BCPL module and the modules upon which the current one depends.
An OCODE module is terminated with the **GLOBAL** directive. The first argument denotes the number of items in the global initialisation list; each $K_i$ is an offset into the global vector and $L_i$ is the label to be associated with that offset (i.e., each pair $K_i\,L_i$ denotes an offset and its label).
Directives are an important class of virtual machine instruction, although little more will be said about them. One reason for this is that, once one becomes aware of their need, there is little else to be said. A second reason is that, although every system is different, there are things that are common to all—in this case, the general nature of directives. It is considered that the directives required by any virtual machine will become clear during its specification.
### 2.6 The Intcode/Cintcode Machine
The Intcode/Cintcode machine is used to bootstrap an OCODE machine on a new processor; it can also serve as a target for the BCPL compiler’s code generator. The code is designed to be as compact as possible. The Cintcode machine was originally designed as a byte-stream interpretive code to run on small 16-bit machines such as the Z80 and 6502 running under CP/M. More recently, it has been extended to run on 32-bit machines, most notably machines running Linux.
The best descriptions of the Intcode and Cintcode machines are [45] and [44], respectively. Compared with OCODE, (Ci/I)ntcode is an extremely compact representation but is somewhat more complex. The complexity arises because of the desire to make the instruction set as compact as possible; this is reflected in the organisation, which is based on bit fields. The organisation of the machine is, on the other hand, easily described. The following description is of the original Intcode machine and follows that in [45] (the account in [44] is far more detailed but is essentially the same in intent).
The Intcode machine is composed of the following components. It has a memory consisting of equal-sized locations that can be addressed by consecutive integers (a vector of words, for example), together with a number of central registers:

• A, B: the accumulator and auxiliary accumulator;

• C: the control register. This is the instruction pointer; it points to the next instruction to be executed;

• D: the address register, used to store the effective address of an instruction;

• P: a pointer that is used to address the current stack frame;

• G: a pointer used to access the global vector.

Note that the Intcode machine has a framed stack and a global vector (both necessary to implement OCODE).
Instructions come in two lengths: single and double length. The compiler determines when a double-length instruction should be used.

The operations provided by the Intcode machine are shown in Table 2.6 (the idea is taken from [45], p. 134; the specification has been re-written using mostly C conventions). As in the OCODE instructions, each operation is specified by a code fragment.
<table>
<thead>
<tr>
<th>Operation</th>
<th>Mnemonic</th>
<th>Specification</th>
</tr>
</thead>
<tbody>
<tr>
<td>Load</td>
<td>L</td>
<td>B := A; A := D</td>
</tr>
<tr>
<td>Store</td>
<td>S</td>
<td>*D := A</td>
</tr>
<tr>
<td>Add</td>
<td>A</td>
<td>A := A + D</td>
</tr>
<tr>
<td>Jump</td>
<td>J</td>
<td>C := D</td>
</tr>
<tr>
<td>Jump if true</td>
<td>T</td>
<td>IF A THEN C := D</td>
</tr>
<tr>
<td>Jump if false</td>
<td>F</td>
<td>IF NOT A THEN C := D</td>
</tr>
<tr>
<td>Call routine</td>
<td>K</td>
<td>D := P + D; *(D+1) := C; P := D; C := A</td>
</tr>
<tr>
<td>Execute operation</td>
<td>X</td>
<td>Various operations, mostly arithmetic or logical operations operating on A and B.</td>
</tr>
</tbody>
</table>
Each Intcode instruction is composed of six fields. They are as follows:

• Function Part: This is a three-bit field. It specifies one of the eight possible machine operations described in Table 2.6.

• Address Field: This field holds a positive integer. It represents the initial value of the D register.

• D bit: This is a single bit. When set, it specifies that the initial value of the D register is to be taken from the following word.

• P bit: This is a single bit. It specifies whether the P register is to be added to the D register at the second stage of the address calculation.

• G bit: This is another single-bit field. It specifies whether the G register is to be added to the D register at the end of the third stage of the address calculation.

• I bit: This is the *indirection* bit. If it is set, it specifies that the D register is to be replaced by the contents of the location addressed by the D register at the last stage of the address calculation.
The effective address is evaluated in the same way for every instruction and is not dependent upon the way in which the machine function is specified.
Intcode is intended to be a compact representation of a program. It is also intended to be easy to implement, thus promoting BCPL’s portability (the BCPL assembler and interpreter for Intcode occupy just under eight and a half pages of BCPL code in [45]).
The Intcode machine also uses indirection (as evidenced by the three-stage address calculation involving addresses in registers), thus making code compact.
This has, of necessity, been only a taster for the Intcode and Cintcode machines. The interested reader is recommended to consult [44] and [45] for more information. The full BCPL distribution contains the source code of the OCODE and Cintcode machines; time spent reading them will be rewarding.
Virtual Machines
Craig, I.D.
2006, XV, 269 p. 43 illus., Hardcover
Abstract
Normal TCP/IP operation is for the routing system to select a best path that remains stable for some time, and for TCP to adjust to the properties of this path to optimize throughput. A multipath TCP would be able to either use capacity on multiple paths, or
dynamically find the best performing path, and therefore reach higher throughput. By adapting to the properties of several paths through the usual congestion control algorithms, a multipath TCP shifts its traffic to less congested paths, leaving more capacity available for traffic that can’t move to another path on more congested paths. And when a path fails, this can be detected and worked around by TCP much more quickly than by waiting for the routing system to repair the failure.
This memo specifies a multipath TCP that is implemented on the sending host only, without requiring modifications on the receiving host.
Table of Contents
1. Introduction .................................................. 3
2. Notational Conventions ........................................ 5
3. Congestion control ............................................. 5
3.1. RTT measurements ........................................ 5
3.2. Fast retransmit .......................................... 6
3.3. Slow retransmit .......................................... 6
3.4. SACK .................................................. 7
3.5. Fairness and TCP friendliness .......................... 8
4. Path selection .................................................. 8
4.1. The multipath IP layer ................................ 9
4.2. The path indication option ............................. 10
4.3. Timestamp integration option ......................... 12
4.4. Path for retransmissions .............................. 12
4.5. ECN .................................................. 13
4.6. Path MTU discovery ................................... 13
5. Flow control and buffer sizes ............................... 14
6. Handling of RSTs .............................................. 14
7. Middlebox considerations ..................................... 14
8. Security considerations ....................................... 15
9. IANA considerations .......................................... 15
10. Acknowledgements ........................................... 15
11. References .................................................. 16
11.1. Normative References ................................. 16
11.2. Informational References ............................. 16
Appendix A. Document and discussion information ............. 17
Appendix B. An implementation strategy ........................ 17
Author’s Address .................................................. 21
1. Introduction
In order to achieve redundancy to protect against failures, network operators generally install more links than the minimum necessary to achieve reachability. So there are often multiple paths between any two given hosts, even when paths not allowed by policy are removed. However, routing protocols usually select a single "best" path. When multiple paths are used at the same time by the routing system, those tend to be parallel links between two routers or paths that are otherwise very similar. As such, a lot of potentially usable network capacity is left unused. A multipath transport protocol would be able to use more of that capacity by sending its data along multiple paths at the same time, or by switching to a path with more available capacity.
As TCP [RFC0793] is used by the vast majority of all networked applications, and TCP is responsible for the vast majority of all data transmitted over the internet, the logical choice would be to make TCP capable of using multiple paths. SCTP already has the ability to use multiple paths through the use of multiple addresses. However, using SCTP in this way requires significant application changes and deployment would be challenging because there is no obvious way for an application to know whether a service is available over SCTP rather than, or in addition to, TCP. In addition, SCTP as defined today [RFC2960] does not accommodate the concurrent use of multiple paths. Additional paths are purely used for backup purposes.
This memo describes a one-ended multipath TCP, which only changes the behavior of the TCP sender, achieving multipath advantages when communicating with unmodified TCP receivers. This means it is not possible to perform path selection by using different destination addresses. However, other mechanisms that are transparent to the receiver are possible. A simple one would be for the sender to send some packets to one router, and other packets to another router. If these routers then make different routing decisions for the destination address in the TCP packets, the packets flow over different paths part of the way. Other mechanisms to achieve the same goal are also possible. However, with a single destination address, paths can’t be completely disjoint.
Using multiple paths at the same time brings up a number of challenges and questions:
- Naive scheduling (such as round robin) of transmissions over the different paths reduces performance of each path to that of the slowest path.
- Using multiple paths causes reordering, which triggers the fast retransmit algorithm, causing unnecessary retransmissions and reduced performance.
- TCP requires in-order delivery of data to the application, so when losses occur on one path, buffer capacity may run out and data can’t be transmitted on unaffected paths until the lost data has been retransmitted.
- Using multiple paths with an instance of regular congestion control on each path for a single TCP session makes that session use network capacity more aggressively than single path sessions, which can be considered "unfair" and increases packet loss.
This memo seeks to address the first two issues by running separate instances of TCP’s congestion control algorithms for the subflows that flow over different paths. Buffer issues are addressed by retransmitting packets before buffer space runs out, even if normal retransmission timers haven’t fired yet. The fairness issue is a topic of ongoing research; this specification simply limits the number of subflows to limit unfairness and increased loss.
The one-ended multipath TCP takes advantage of the fact that TCP congestion control and flow control are performed by the sender. With regard to flow control and congestion control, the role of the receiver is limited to sending back acknowledgments and advertising how much data it is prepared to receive. Hence, it is possible for the sender to utilize different paths and modify the fast retransmit logic as long as the receiver recognizes the packets as belonging to the same session. So a multipath TCP sender can distribute packets over multiple paths as long as this doesn’t require incompatible modifications to the IP or TCP header contents, most notably the addresses. A single-ended multipath TCP session must still be between a single source address and a single destination address, regardless of the path taken by packets.
The subset of the packets belonging to a TCP session flowing over a given path is designated a subflow.
In order to benefit from using multiple paths, it’s necessary for the multipath TCP sender to execute separate TCP congestion control instances for the packets belonging to different subflows. In the case where all packets are subject to the same congestion window, performance over a fast and a slow path will often be poorer than over just the fast path, defeating the purpose of using multiple paths. For instance, in the case of a 10 Mbps and a 100 Mbps path with otherwise identical properties, a simple round robin distribution of the packets and the use of a single congestion window will limit performance to that of the slowest path multiplied by the number of paths, 20 Mbps in this case.
2. Notational Conventions
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].
3. Congestion control
A multipath TCP maintains instances of all congestion control related variables for each subflow. This includes, but is not limited to, the congestion window, the ssthresh, the retransmission timeout (RTO), the user timeout and RTT measurements. However, because TCP requires in-order delivery of data, there must be a single send buffer and a single receive buffer, thus flow control must happen session-wide.
Per-subflow congestion control is performed by recording the path used to transmit each packet. Acknowledgments are then attributed to the subflow the acknowledged packets were sent over and the congestion window and other congestion control variables for the relevant subflow are updated accordingly.
3.1. RTT measurements
Because a multipath TCP sender knows which packet it sent over which path, it can perform per-path round trip time measurements. This only works if return packets are consistently sent over the same path (or a set of paths with the same latency). If the receiver is not multipath-aware, this condition will generally hold: acknowledgments will flow from the receiver to the sender over a single path unless there is a topology change in the routing system or packets that belong to a single session are distributed over different paths by routers, which is rare. To multipath-capable routers on the return path (if any), the non-multipath-aware host appears to select the default path for all of its packets.
However, if, like the sender, the receiver is multipath-aware, then the return path that the receiver chooses to send ACKs over will influence the RTTs seen by the original sender. The situation where the sender is unaware of the fact that the receiver selects different return paths with different latencies is suboptimal, even compared to consistently measuring the RTT over the slowest path, as this leads to higher variability in the RTT measurements and therefore a higher retransmission timeout.
Having the receiver send ACKs over the same path mitigates the problem somewhat; but presumably, if the receiver is also multipath capable and has data to send, it will want to send this data over more than one path. So RTT measurements may inadvertently end up measuring different return paths in that case. A better solution is for the sender to include an indication in packets that allows the receiver to determine through which path the sender sent the packet. This information, along with the path initially chosen for the outgoing packet that is acknowledged, allows TCP to attribute each RTT measurement to a specific path.
Because congestion control happens per path, there must also be a separate retransmission timeout (RTO) value for each path.
3.2. Fast retransmit
Different paths will almost certainly have different RTTs, and even if the average RTT is the same, normal burstiness and differences in packet sizes will make packets routinely arrive through the different paths in a different order than the order in which they were transmitted. Without modifications to the algorithm, this would trigger the fast retransmit algorithm unnecessarily. To avoid this, fast retransmit is executed whenever, for packets belonging to the same subflow, after an unACKed packet or sequence of packets, more than two segments of new data are ACKed with SACK. This means fast retransmit happens per subflow, and reordering between subflows no longer triggers fast retransmit.
3.3. Slow retransmit
In multipath TCP, a per-path RTO is employed to recover from congestion events that fast retransmit can’t handle. Because the missing packets create holes in the data stream, subsequent packets received over other paths must be buffered in the receive buffer. Unless the receive buffer is extremely large, this means the entire session stalls when the receive buffer fills up. This situation persists until the RTO expires for the congested or broken path so the missing packets can be retransmitted. Should the path in question be completely broken, this will then lead to an almost immediate new stall, and the stall/RTO cycles will then continue until the user timeout / R2 timer [RFC1122] for the subflow expires.
This is solved by taking unacknowledged packets transmitted over subflows that are stalled because they have exhausted their congestion window and are now waiting for the RTO to expire, and scheduling retransmissions of those packets over other paths before the RTO of the stalled subflow expires. This should be done such that the missing packet arrives before it becomes necessary to stop sending data altogether because the receiver advertises a zero receive buffer. Such retransmissions therefore happen as the receive buffer space advertised by the receiver reaches RTT * MSS for the path that will be used for the retransmission; presumably the path with the lowest RTT. In essence, this creates a second level of fast retransmit that acts across subflows in addition to the normal fast retransmit that happens per subflow. This mechanism is named "slow retransmit".
In the case of single path TCP, scheduling retransmissions before the RTO expires could be problematic because this would be more aggressive than standard (New)Reno congestion control. But in the case of multipath TCP, the retransmission can happen over one of the other paths, which is still progressing.
By scheduling a retransmission faster than an RTO, there is an increased risk that a packet that was still working its way through the network is retransmitted unnecessarily. However, the alternative is allowing the progress of the session to stall (on all paths), reducing throughput significantly.
3.4. SACK
When packets (belonging to different subflows) arrive out of order, the receiver can’t acknowledge the receipt of the out of order packets using TCP’s normal cumulative acknowledgment. However, the [RFC2018] (also see [RFC1072]) Selective Acknowledgment (SACK) mechanism is widely implemented. SACK makes it possible for a receiver to indicate that three or four additional ranges of data were received in addition to what is acknowledged using a normal cumulative ACK. When packets are sent over multiple paths and arrive out of order, the information in the SACK returned by the receiver can tell the sender how each subflow is progressing, so per-subflow congestion control can progress smoothly and unnecessary retransmissions are largely avoided.
One-ended multipath TCP requires the use of SACK to be able to determine which subflows are progressing even if other subflows are stalled, and thus the normal TCP ACK isn’t progressing. If the remote host doesn’t indicate the SACK capability during the three-way handshake, a multipath TCP implementation SHOULD limit itself to using only a single subflow and thus disabling multipath processing for the session in question.
3.5. Fairness and TCP friendliness
One of the goals of multipath TCP is increased performance over regular TCP. However, it would be harmful to realize this benefit by taking more than a "fair" share of the available bandwidth. One choice would be to make each subflow execute normal NewReno congestion control on each subflow, so that each individual subflow competes with other TCPs on the same footing as a regular TCP session. If all subflows use non-overlapping physical paths, other TCPs are no worse off than in the situation where the multipath TCP were a regular TCP sharing their path, so this could be considered fair even though the multipath TCP increases its bandwidth in direct relationship to the number of subflows used. Note that in this case, although multipath TCP sends at the same rate as regular TCP on a given path, resource pooling [wischik08pooling] benefits are still realized because a given transmission completes faster so it uses up resources for a shorter amount of time.
But if several logical paths share a physical path, multipath TCP takes a larger share of the bandwidth on that path. This would only be acceptable as fair for a very small number of subflows. The other end of the spectrum would be for multipath TCP to conform to exactly the same congestion window increase and decrease envelope that a regular TCP exhibits, being no more aggressive than a regular single path TCP session. At this point in time we will assume that fairness is a tunable factor of the regular NewReno AIMD envelope. A simple way to limit the amount of additional aggressiveness exhibited by multipath TCP is a limit on the number of subflows. Until more analysis has been performed and/or there is more experience with multipath TCP, a multipath TCP implementation SHOULD limit itself to using no more than 3 subflows concurrently.
4. Path selection
Note that in order to gain multipath benefits, the multipath TCP layer must be able to determine the logical path followed by each packet so it can measure path properties and perform per-path congestion control. In order to limit the number of packets flowing over each path to the amount allowed by the per path congestion window, the multipath TCP layer must be able to specify over which path a given packet is transmitted.
Internet-Draft One-ended multipath TCP May 2009

The situation where routers distribute packets over different paths based on their own criteria makes it impossible for hosts to send less traffic over congested paths and more traffic over uncongested paths, and is therefore incompatible with multipath TCP. When routers distribute traffic belonging to the same flow (or, in the case of multipath TCP, a subflow) over different paths, this will also cause reordering and the associated performance impact on TCP.
4.1. The multipath IP layer
The one-ended multipath TCP is logically layered on a multipath IP layer, which is able to deliver packets to the same destination address through one or more logical paths, where the set of n logical paths share between one and m physical paths. In some cases, the multipath IP layer will be able to determine that a logical path isn’t working, or maps to the same physical path as a previous logical path. For example, if the multipath TCP indicates that a packet should be sent over the third path, and the multipath IP is set up to use different next hop addresses for path selection, but only two next hop addresses are available, the multipath IP layer can provide feedback to the multipath TCP layer. In other cases, packets simply won’t be delivered, or will be delivered through the same physical path used by other logical paths. This may for instance happen when multipath TCP selects path 1 and multipath IP puts a path selector with value "1" in the packet, but there are no multipath capable routers between the source and destination, so all packets, regardless of the presence and/or value of a path selector, are routed over the same physical path.

It is up to the multipath TCP layer to handle each of these situations.
For the purposes of this multipath TCP specification, the simplest possible interface to the multipath IP layer is assumed. When TCP segments traveling down the stack from the TCP layer to the IP layer aren’t accompanied by a path selector value, or the path selector value is zero, the IP layer delivers packets in the same way as for unmodified TCP and other existing transport protocols, i.e., over the default path. Segments may also be accompanied by a path selector value higher than zero, which indicates the desired path. If the desired logical path is available, or may be available, the multipath IP layer attempts to deliver the packet using that logical path. If the desired logical path is known to be unavailable, the multipath IP layer drops the segment.
It is assumed that paths as seen by the multipath IP layer are mapped to logical paths with increasing numbers, roughly ordered by decreasing assumed performance or availability. I.e., if path x doesn’t work or has low performance, that doesn’t necessarily mean that path x+1 doesn’t work or has low performance, but if paths x, x+1 and x+2 don’t work or have low performance, then it’s highly likely that paths x+3 and beyond also don’t work or have even lower performance. Routers may have good next hop or even intra-domain link weight information and link congestion information, but they generally don’t have information about the end-to-end path properties, so the ordering of paths from high to low availability/performance must be considered little more than a hint.
The multipath IP layer may be implemented through a variety of mechanisms, including but not limited to:
- Using different outgoing interfaces on the host
- Directing packets towards different next hop routers
- Integration with shim6 [I-D.ietf-shim6-proto] so that packets can use different address pairs
- Manipulation of fields used in ECMP [RFC2992] (i.e., a different flow label)
- Type of service routing (such as [RFC4915])
- Different lower layer encapsulation, such as MPLS
- Tunneling through overlays
- Source routing
- An explicit path selector field in packets, acted upon by routers
At this time, no choice is made between these different mechanisms.
4.2. The path indication option
Note that several of the fields discussed below are defined with future developments in mind; they are not necessarily immediately useful.
In order to allow for accurate RTT measurements and to inform the IP layer of the selected path, a TCP option indicating the desired path is included in all segments that don’t use the default path. The format of this option is as follows:
```
+---------------+---------------+-+-----+-+-----+
|   KIND=TBA    |  LENGTH = 3   |D| MP  |R| SP  |
+---------------+---------------+-+-----+-+-----+
```
The length is 3.
D is the "discard eligibility" flag (1 bit). It is similar, but not identical, to the frame relay discard eligibility bit or the ATM cell loss priority bit. Set to zero, no special behavior is requested. Set to one, this indicates that loss of the packet will be inconsequential. This allows routers to drop packets with D=1 more readily than other packets under congested conditions, and also to completely block packets with D=1 on links that are considered long-term congested or expensive, even if there is no momentary congestion.
Setting the D bit to 1 for some subflows (presumably, ones with a performance lower than the best performing subflow) allows multipath TCP to give way to regular TCP and other single path traffic on congested or expensive paths. As long as the multipath TCP sets D to 0 on the subflow with the best performance, multipath TCP should still perform better than regular TCP, but the reduction in bandwidth use on the other paths helps achieve resource pooling benefits.
MP is a path selector that may be interpreted by multiple routers along the way (3 bits). A value of 0 is the default path that is also taken by packets that don’t contain a multipath option. Multipath TCP aware routers should take this value into account when performing ECMP [RFC2992]. Packets with any value for MP MUST be forwarded, even if the number of available paths is smaller than the value in MP.
R (1 bit) is reserved for future use. MUST be set to zero on transmission and ignored on reception.
SP is a path selector that is interpreted only once by the local TCP stack or a router close to the sender (3 bits). A value of 0 is the default path that is also taken by packets that don’t contain a multipath option. If the value in SP points to a path that isn’t available, the packet SHOULD be silently dropped. This behavior, as opposed to selecting an alternate path out of the available ones, helps avoid the use of duplicate paths. As such, a router may only interpret SP rather than MP when it is known that the router is the only one acting on SP. All other routers may only act on MP.
It is not expected that routers will make routing decisions directly based on the path indication option, as this option occurs deep inside the packet and not in a fixed place. However, a multipath IP layer or a middlebox may write a path selection value into a field in packets that is easily accessible to routers. But conceptually, the routers act upon the values in SP and MP.
The initial packets for each TCP session MUST use D, MP and SP values of zero. If D, MP and SP are all zero, then the path selector option isn’t included in the packet. This makes sure that single path
operation remains possible even if packets with the path selector option are filtered in the network or rejected by the receiver. The packets that are part of the TCP three-way handshake SHOULD be sent over the default path, in which case they don’t contain the path selector option; hence the ability to do multipath TCP isn’t indicated to the correspondent at the beginning of the session as is usual for most other TCP extensions.
4.3. Timestamp integration option
As an optimization, hosts MAY borrow the four bits used by the path selector option from the timestamp option and thus save one byte of option space: the path selector option then replaces the padding necessary when the timestamp option is used, and doesn't increase header overhead. In that case, the combined path selector and timestamp options MUST appear as follows:
```
+-------------+-------------+-------------+-------------+
|  KIND=TBA   | LENGTH = 2  |   KIND=8    | LENGTH = 10 |
+---+---------+-------------+-------------+-------------+
| D |   MP    |            TS Value (TSval)             |
+---+---------+-----------------------------------------+
|                 TS Echo Reply (TSecr)                 |
+-------------------------------------------------------+
```
D and MP are the same as in the three-byte form of the path selector option. R and SP do not occur in this form of the path selector option and are assumed to be zero.
TSval is the locally generated timestamp. Because the timestamp is reduced to 28 bits, the minimum tick interval of the timestamp clock is increased from the 59 nanoseconds mandated by [RFC1323] to 1 microsecond, so that the timestamp wraps in no less than 255 seconds.
TSecr is the timestamp echoed back to the other side (32 bits).
All hosts conforming to this specification MUST be able to recognize the integrated path selector and timestamp options, but they are not required to generate them.
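As a sketch of the combined encoding, D and MP share a 32-bit word with the truncated 28-bit TSval; at a 1 microsecond tick, a 28-bit timestamp wraps after 2^28 microseconds, about 268 seconds, which satisfies the 255-second bound above (Python, illustrative only; the exact bit placement of D and MP is an assumption):

```python
def pack_ts_word(d: int, mp: int, tsval_us: int) -> int:
    """Pack D (1 bit) and MP (3 bits) into the top four bits of a
    32-bit word, with the 28-bit microsecond TSval in the rest."""
    return (d << 31) | (mp << 28) | (tsval_us & 0x0FFFFFFF)

def unpack_ts_word(word: int) -> tuple:
    return ((word >> 31) & 1, (word >> 28) & 7, word & 0x0FFFFFFF)

# wrap time of a 28-bit timestamp at a 1 microsecond tick
wrap_seconds = (1 << 28) / 1_000_000   # about 268 s, i.e. no less than 255 s
```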
4.4. Path for retransmissions
A multipath TCP implementation MUST be capable of scheduling retransmissions over a path different from the path used to transmit the packet originally. This includes packets subject to fast retransmit.
4.5. ECN
Explicit Congestion Notification works by routers setting a congestion indication in the IP header of packets rather than dropping those packets when they experience congestion. The receiver echoes this information back to the sender, which then performs congestion control in exactly the same way as if a packet was lost. The ECN specification ([RFC3168]) is such that the receiver sets the ECN-Echo (ECE) flag in the TCP header for all subsequent packets that it sends back until the sender sets the Congestion Window Reduced (CWR) flag. As the ECE flag is set in multiple ACKs, there is no obvious way to correlate the ECN indication in an ACK with the specific packet that experienced congestion, and consequently, with the path that is congested.
At this time, a multipath TCP conforming to this specification SHOULD NOT use ECN. ECN MAY be negotiated, but when more than a single path is used at a given time, packets SHOULD be sent with the ECN field set to Not-ECN (00), and incoming non-zero ECE flags SHOULD NOT be acted upon with regard to congestion control.
4.6. Path MTU discovery
Path MTU discovery ([RFC1191]) is performed for TCP by having TCP reduce its packet sizes whenever "packet too big but DF set" ICMP messages are received. As the name suggests, the path MTU is dependent on the path used, so multipath TCP must maintain MTU information for each path, and adjust this information for each path individually based on the too big messages that it receives.
The time between probes with a larger than previously discovered MTU must either be randomized or explicitly coordinated, to avoid probing larger MTUs on multiple subflows at the same time: probing a larger MTU is likely to lead to a lost packet, and having losses on multiple paths at the same time would be suboptimal. For instance, with two paths, rather than probing both every interval t, the first path is probed at t*0.5, the second at t, the first again at t*1.5, and so on.
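The coordinated schedule above can be sketched as follows (illustrative Python; `probe_schedule` is a hypothetical helper that generalizes the two-path example by offsetting each path's probes within the interval t):

```python
def probe_schedule(n_paths: int, t: float, rounds: int):
    """Return (time, path) pairs such that no two paths probe a
    larger MTU at the same moment: path k probes at t*(k+1)/n,
    then once every t thereafter."""
    events = []
    for r in range(rounds):
        for k in range(n_paths):
            events.append((t * (r + (k + 1) / n_paths), k))
    events.sort()
    return events
```

For two paths and interval t = 1, this yields probes at 0.5, 1.0, 1.5 and 2.0, alternating between the two paths as described.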
Both the IPv4 and IPv6 versions of ICMP return enough of the original packet in a "packet too big" message to be able to recover the sequence number from the original packet, which makes it possible to correlate the too big message with the packet that caused it, and thus the path used to transmit the packet.
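For illustration, the sequence number occupies bytes 4-7 of the TCP header, well within the original IP header plus eight bytes of payload that ICMP returns. A minimal Python sketch for the IPv4 case (`seqnum_from_icmp_payload` is a hypothetical helper, not part of the specification):

```python
import struct

def seqnum_from_icmp_payload(payload: bytes) -> int:
    """Extract the TCP sequence number from the echoed start of the
    original packet in an ICMP "packet too big" message.  The IP
    header length is taken from the IHL field, so IPv4 options are
    handled; payload must contain at least IHL + 8 bytes."""
    ihl = (payload[0] & 0x0F) * 4   # IPv4 header length in bytes
    # TCP header: source port (2), destination port (2), then the
    # 4-byte sequence number
    (seq,) = struct.unpack_from("!I", payload, ihl + 4)
    return seq
```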
5. Flow control and buffer sizes
In order to accommodate the increased number of packets in flight, the send buffer must be increased in direct relationship with the number of paths being used. Alternatively, the number of paths used concurrently should be limited to send buffer / avgRTT.
Although under normal operation, the receive buffer doesn’t fill up, there are two reasons the receive buffer must be the same size as the send buffer: it must be able to accommodate a round trip time plus two segments worth of data during fast retransmit, and the advertised receive window limits the amount of data the sender will transmit before waiting for acknowledgments. So in practice, the receive buffer limits the maximum size of the send buffer, and therefore, the number of paths that can be supported concurrently.
There is no simple rule of thumb to determine the number of paths that should be used, as the maximum number of paths that the receive window can accommodate depends both on the maximum receive window advertised by the receiver and by the RTTs on the paths.
6. Handling of RSTs
If an RST is received after enabling a new path, this could be a reaction to the presence of an unknown option. The optimal behavior would therefore be for an RST to reset just the path used to send the packet that generated the RST, not the entire session. Only when the last remaining path or the default path (on which packets don’t include special options) receives an RST should the entire session be reset.
7. Middlebox considerations
NATs are designed to be transparent to TCP. Because one-ended multipath TCP conforms to normal TCP semantics on the wire, multipath TCP should in principle also be compatible with NAT. However, if different paths are served by different NATs that apply different translations, the receiver won’t be able to determine that the different subflows through the different paths belong to the same TCP session. So for NAT to work, the translation must either happen in a location that all paths flow through, or the different NATs on the different paths must act as a single, distributed NAT and apply the same translation to the different subflows.
Middleboxes that only see traffic flowing over a subset of the paths used will see large numbers of gaps in the sequence number space. They may also observe only a partial three-way handshake, or no ACKs at all. As with NATs, middleboxes that enforce conformance to known TCP behavior must therefore be placed such that they observe all subflows. For middleboxes that just check whether packets fall inside the TCP window, it may be sufficient for multipath TCP senders to make sure that all paths see at least one packet per window. Middleboxes that enforce sequence number integrity will almost certainly also block TCP packets for which they didn’t observe the three-way handshake. A possible way to accommodate that behavior would be to send copies of all session establishment and tear-down packets over all paths that the sender may use. However, this strategy is still likely to fail unless the receiver does the same, so that the middleboxes observe the signaling packets flowing in both directions.
It’s also possible that middleboxes (or perhaps hosts themselves) reject packets with the path indicator TCP option. Since packets flowing over the default path don’t carry the path indicator option, these packets should always be allowed through, so single path operation is always possible. When a multipath TCP sender starts to send packets over alternative paths, those packets won’t make it to the receiver because they contain the path indicator option. The result is that a new subflow, which would use a congestion window of two maximum segment sizes, would send two packets and then experience a retransmission timeout. The slow, timeout-based retransmission makes sure the packets are eventually transmitted before the session stalls, so the impact of the lost packets is negligible.
8. Security considerations
None at this time.
9. IANA considerations
IANA is requested to provide a TCP option kind number for the path indication option.
10. Acknowledgements
The single ended multipath TCP was developed together with Marcelo Bagnulo and Arturo Azcorra.
Members of the Trilogy project, especially Costin Raiciu, have contributed valuable insights.
Iljitsch van Beijnum is supported by Trilogy.
11. References
11.1. Normative References

   [RFC1191]  Mogul, J. and S. Deering, "Path MTU discovery", RFC 1191,
              November 1990.

   [RFC1323]  Jacobson, V., Braden, R., and D. Borman, "TCP Extensions
              for High Performance", RFC 1323, May 1992.

   [RFC2581]  Allman, M., Paxson, V., and W. Stevens, "TCP Congestion
              Control", RFC 2581, April 1999.

   [RFC3168]  Ramakrishnan, K., Floyd, S., and D. Black, "The Addition
              of Explicit Congestion Notification (ECN) to IP",
              RFC 3168, September 2001.

11.2. Informational References

   [RFC2992]  Hopps, C., "Analysis of an Equal-Cost Multi-Path
              Algorithm", RFC 2992, November 2000.
Appendix A. Document and discussion information
The latest version of this document will always be available at http://www.muada.com/drafts/. Please direct questions and comments to the multipathtcp@ietf.org mailing list or directly to the author.
Appendix B. An implementation strategy
In order to perform per-path congestion control, all of the ACK-based events that trigger congestion control responses as well as all the variables used by the congestion control algorithms must be recreated in the multipath situation. These are the triggers and variables for the four mechanisms in RFC 2581.
1. the path MTU (page 4)
2. the arrival of an ACK that acknowledges new data (page 4)
3. the arrival of a non-duplicate ACK (page 4) or the sum of new data acknowledged (page 5)
4. triggering of the retransmission timer (page 5)
5. the flightsize or number of bytes sent but not acknowledged (page 5)
6. the retransmission of a segment (page 5)
7. the arrival of a third or subsequent duplicate ACK (page 6, page 7)
8. whether a retransmission timeout period has elapsed since the last reception of an ACK (page 7)
1, 4, 6 and 8 are maintained session-wide.
We recreate these events and variables based on SACK information in the one-sequence number multipath TCP case as follows.
We keep track of every packet sent. (Alternatively: of multi-packet contiguous blocks of data transmitted over the same path.) When an ACK comes in, we first remove the stored information about packets/data blocks that are cumulatively ACKed, noting how much data was ACKed for each path that the packets were sent over. Then we do the same for all the SACK blocks in the ACK. Because we remove the information about (S)ACKed data, and something can only be removed once, we don’t have to keep track of previous SACKs the way the current BSD implementation does.
The only slightly tricky part is emulating duplicate ACKs. This may not even be really necessary, as the SACKs give us better information to base fast retransmit on, but that’s something for another day.
What happens in the pseudo code is that while traversing the list of sent packets (which happens in order of sequence number), we note the path that each packet that isn’t SACKed was sent over. When we’re done processing the SACK data and it turns out that, for some path, we skipped over one or more packets and data was also SACKed after a skipped packet, there was a lost (or reordered) packet on that path. When the amount of "duplicate ACKed" data grows beyond two segment sizes, we’ve reached the equivalent of three duplicate ACKs, so we trigger fast retransmit (7).
We update the congestion window (2 and 3) when there was data (S)ACKed for a path. ACKs that don’t acknowledge any data for a path aren’t relevant because we don’t need them to trigger fast retransmit and we assume that they’re sent to (S)ACK data for other paths, anyway. (Or they could be window updates.)
We maintain the flightsize (5) by simply adding data bytes as packets are transmitted and subtracting them when they’re (S)ACKed. Because we have explicit SACKs, we don’t need to guess based on duplicate ACKs. The flightsize is also adjusted when we perform fast retransmit or a regular retransmission over a path other than the one used for the original packet. In addition, we explicitly mark some packets to trigger once-per-RTT actions when they’re ACKed.
Pseudo code for the above:
// initializing data structures is left as an exercise for the reader

// transmitting packets
// assume we've selected a path to transmit over
path.flightsize = path.flightsize + packet.datasize
packet.path = path
packet.acked = false
// set up state to remember to do per-RTT stuff when packet is ACKed
if path.do_per_rtt_next_packet == true
  path.per_rtt_seqnum = packet.seqnum.first
  packet.per_rtt = true
  path.do_per_rtt_next_packet = false
else
  packet.per_rtt = false
// don't set ECN on outgoing packets for now; logic for deciding
// which packets to ECN enable can be added later
packet.ecn.sent = 0
// add to linked list of sent packets (to handle retransmissions,
// the linked list must maintain seqnum order, not FIFO or LIFO)
llpush(packet)

// receiving (S)ACKs
// normal flow-wide flow control actions based on the cumulative ACK
// also happen (elsewhere)

// handle ECN; we must detect transitions rather than
// depend on the actual value
if packet.ecnecho == true
  if ecn.previous == true
    ecn.current = false
  else
    ecn.current = true
    ecn.previous = true
else
  ecn.current = false
  ecn.previous = false

// initialize some state before we handle the ACK
for each path
  path.do_per_rtt = false
  path.ackedbytes = 0
  path.unacked.sure = 0
  path.unacked.maybe = 0
  path.ecn.received = false

// remove cumulatively ACKed packets
llwalk_init
packet = llwalk_next
while packet.seqnum.first < ack.cumulative
  path = packet.path
  // ECN: we only act if we enabled ECN when we sent the packet
  if ecn.current & packet.ecn.sent <> 0
    path.ecn.received = true
  // if only part of a packet is ACKed, we need some trickery
  if packet.seqnum.last_plus_one > ack.cumulative
    path.ackedbytes = path.ackedbytes + (ack.cumulative - packet.seqnum.first)
    packet.seqnum.first = ack.cumulative
  else
    path.ackedbytes = path.ackedbytes + packet.datasize
    if packet.per_rtt & packet.seqnum.first == path.per_rtt_seqnum
      path.do_per_rtt = true
    llremove(packet)
  packet = llwalk_next

// now we handle the SACKs (assume exactly one SACK block for
// simplicity); we continue walking the linked list, no need to
// restart
while packet.seqnum.first < ack.sack.last_plus_one
  path = packet.path
  if packet.seqnum.last_plus_one > ack.sack.first
    // this packet overlaps with the SACK block
    // for simplicity, assume packets are always completely SACKed;
    // in reality we need to split a packet if only its middle is SACKed
    // ECN: we only act if we enabled ECN when we sent the packet
    if ecn.current & packet.ecn.sent <> 0
      path.ecn.received = true
    path.ackedbytes = path.ackedbytes + packet.datasize
    if packet.per_rtt & packet.seqnum.first == path.per_rtt_seqnum
      path.do_per_rtt = true
    // promote potentially unacknowledged bytes to surely unacknowledged
    // bytes, because we now know there was a SACK hole, if any
    path.unacked.sure = path.unacked.sure + path.unacked.maybe
    path.unacked.maybe = 0
    llremove(packet)
  else
    // note how many bytes we skipped over unSACKed;
    // if later data is SACKed, that's our version of a dup ACK
    path.unacked.maybe = path.unacked.maybe + packet.datasize
  packet = llwalk_next

// done processing, now tally up the results
for each path
  // update flightsize (item 5 in the CC events/variables list)
  path.flightsize = path.flightsize - path.ackedbytes
  // if any data was ACKed for this path
  if path.ackedbytes <> 0
    if path.unacked.sure > 2 * path.mss
      // more than 2 * MSS worth of data in SACK holes is the
      // equivalent of three duplicate ACKs: execute fast retransmit
      // (item 7 in the CC events/variables list); flightsize needs
      // to be handled in some way here; ignore ECN because we
      // already have a loss, but do send back an ECN window update
      // indication
    else
      // the SACKs were cumulative for this path: execute the cwnd
      // update (items 2 and 3 in the CC events/variables list);
      // ECN must be taken into account here, and an ECN window
      // update indication sent back
    if path.do_per_rtt
      // execute per-RTT actions
      // indicate that this should also be set for the next packet sent
      path.do_per_rtt_next_packet = true
Note that the pseudo-code doesn’t cover all the mechanisms explained earlier. Also, ECN is handled here because it’s not too difficult to do. The hard part is deciding which packets to enable ECN for.
Author’s Address
Iljitsch van Beijnum
IMDEA Networks
Avda. del Mar Mediterraneo, 22
Leganes, Madrid 28918
Spain
Email: iljitsch@muada.com
Magic is Relevant
Inderpal Singh Mumick*
Stanford University
Hamid Pirahesh
IBM Almaden Research Center
Sheldon J. Finkelstein†
IBM Almaden Research Center
Raghu Ramakrishnan‡
University of Wisconsin at Madison
Any sufficiently advanced technology is indistinguishable from magic
— Arthur C. Clarke, in "Profiles of the Future"
Abstract
We define the magic-sets transformation for traditional relational systems (with duplicates, aggregation and grouping), as well as for relational systems extended with recursion. We compare the magic-sets rewriting to traditional optimization techniques for nonrecursive queries, and use performance experiments to argue that the magic-sets transformation is often a better optimization technique.
1 Introduction
"Magic-sets" is the name of a query transformation algorithm ([BMSU86]) (and now a class of algorithms — Generalized Magic-sets of [BR87], Magic Templates of [Ram88], Magic Conditions of [MFPR90]) for processing recursive queries written in Datalog. Previously, these algorithms had not been deployed in standard relational database systems, and their value for such systems had not been assessed.
*Part of this work was done at the IBM Almaden Research Center. Work at Stanford was supported by an NSF grant IRI-87-22686, an Air Force grant AFOSR-88-0266, and a grant of IBM Corporation.
†Author's current affiliation: Tandem Computers.
‡Part of this work was done while the author was visiting IBM Almaden Research Center. Work at Wisconsin was supported by an IBM Faculty Development Award and an NSF grant IRI-88-04319.
Relational database systems support a number of features beyond those in Datalog, including duplicates (which lead to multisets), aggregation and grouping. We extended the magic-sets approach (and the Datalog language) to handle these features ([MPR90]), and also showed that magic-sets can be extended to propagate conditions other than equality ([MFPR90]).
This paper synthesizes, extends, and applies those results; our goal is to demonstrate that magic-sets is a robust technique which can profitably be incorporated in practical relational systems, not just for processing recursive queries (e.g., bill-of-materials) but also for nonrecursive queries. The technique is particularly valuable for complex queries such as decision-support queries.
The paper is organized as follows. We present a brief subsection describing SBSQL, the SQL language of Starburst that supports recursion, and give a realistic example of a non-linear query. Section 2 motivates the practicality of the magic-sets technique by describing its relationship to traditional transformations (such as predicate pushdown) for nonrecursive complex SQL queries. Section 3 defines the magic-sets transformation and some related concepts, as published elsewhere. Section 4 describes our extension of the magic-sets transformation for relational database systems, resolving complications arising when our previous results [MFPR90, MPR90] are combined and applied to SBSQL. We show that recursions introduced by the magic-sets transformation of nonrecursive queries can be avoided, and that combining the adornment phase with the magic-sets transformation allows us to propagate arbitrary conditions using a simple adornment pattern. An overview of the implementation of magic-sets in the Starburst extensible relational database prototype at IBM Almaden Research Center ([HFLP89]) is presented in Section 5. Section 6 gives DB2 performance measurements demonstrating that magic-sets can improve the performance of complex nonrecursive SQL queries over traditional techniques such as correlation and decorrelation. Section 7 presents conclusions.
1.1 Starburst SQL (SBSQL)
The SQL language in Starburst (SBSQL) has been extended to include recursion, user-defined functions, and abstract data types. SBSQL supports a number of operations on tables, including SELECT, GROUPBY, and UNION. The SELECT operation performs a join and selection on the input tables and outputs a set of expressions on columns of qualified tuples. A sophisticated user of Starburst (the Database Customizer) may define new operations (e.g., outer join) on tables, so the query-rewrite phase of Starburst needs to be adaptable to language extensions.
Definition 1.1 (Table Expressions) A texp (table expression) in SBSQL is an expression defining a named derived table that can be used anywhere in the query in place of a base table. A texp includes a head and a body. The head of a texp specifies its output table (name, attribute names). The body of a texp is an SBSQL query specifying how the output table is computed.
As an example, consider the following query that determines the employee number and salary of senior programmers along with the average salary and head count of their departments.
Example 1.1 (Table Expression)
(Q) SELECT Eno, Sal, Avgsal, Empcount
    FROM emp, dinfo(Dno, Avgsal, Empcount) AS
        (SELECT Dno, AVG(Sal), COUNT(*) FROM emp GROUPBY Dno)
    WHERE Job = "Sr Programmer" AND emp.Dno = dinfo.Dno
In this paper, we will sometimes refer to an SBSQL query as a program.
2 Relationship of Magic-sets to Traditional Optimizations
This section gives an informal description of magic-sets and its relationship to more traditional transformation techniques, and compares our work with previous results. Section 3 defines the magic-sets transformation formally.
2.1 Predicate Pushdown
Practical relational database systems push selection predicates as far down as possible in the execution tree. Data is filtered so that irrelevant rows are not propagated; in some cases, predicates are applied implicitly via the access path chosen to retrieve data. Consider the following SQL query with emp(Eno, Ename, Sal, Bonus, Job, Dno, EkindsN) and dept(Dno, Mgrno, Location) as the base tables:
(Q) SELECT Ename, Mgrno FROM emp, dept
    WHERE Job = "Sr Programmer" AND Sal + Bonus > 50000 AND
        emp.Dno = dept.Dno AND Location = "San Jose" AND P(emp, dept)
P is some complex subquery. A system might use an index on Job to access only senior programmers, immediately apply the predicate on Sal + Bonus, access dept using an index on its Dno column, immediately apply the predicate on Location, and finally evaluate the
subquery \( P \). Using an index is often cost-effective, even though it can be thought of as introducing an extra join (with the index). Index access can eliminate retrieval of many irrelevant rows (and hence, perhaps, many irrelevant data pages). Once we have an \( emp \) row, \( emp.Dno \) can be passed down, so that the \( Dno \) predicate can be used for accessing (or filtering) \( dept \).
The predicate on \( Sal + Bonus \) could have been applied after \( dept \) was retrieved, but instead “predicate pushdown” moved it next to the data access. Since evaluation of such predicates is expensive, predicate pushdown (eliminating irrelevant rows as soon as possible) is typically a very good strategy.
The semi-join operator [BC81, RBF+80] takes this idea a step further. If an employee’s department is not in San Jose, that employee is just as irrelevant (for the above query) as if she were a Junior Programmer. By computing \( SJDno \), the \( Dno \)'s of the departments in San Jose, a system can follow a modification of the above execution plan in which employees are filtered (based on \( emp.Dno \) in \( SJDno \)) before the join with the \( dept \) table. As with indexes, an extra operation is introduced (the computation of and join with \( SJDno \)). Applying this operation means that the \( dept \) table must be accessed twice, first to compute \( SJDno \), then to access matching departments.
In the original plan above, \( emp \) was accessed first, then the value of \( Dno \) was passed to \( dept \), so that relevant departments were accessed. The semi-join predicate restricting employees to those with \( Dno \) in \( SJDno \) was passed “sideways” in the opposite direction, from the department table. Using information passed sideways is the “magic sets” approach — systematically introducing predicates based on information passed sideways, so that these predicates can be used to filter out irrelevant data as soon as possible.
Unlike the standard predicate pushdown transformation, magic may be applied in many different ways for a particular query; these correspond to the “sips” (sideways information passing strategies) chosen. Sips can be used flexibly: for each join order (sips order), we can pick any set of tables to generate bindings, and pick any subset of the bindings and push them down. This produces magic predicates, which are bindings on certain columns (similar to the semi-join predicate for \( SJDno \) above). The names used for magic tables show how they were created: these names have superscripts indicating restrictions on attributes of the original query’s tables. The superscripts are called “adornments.”
2.2 Correlation and Decorrelation
Several authors have studied SQL subqueries ([ISO89, ABC+79]) and described transformations for migrating predicates across them [Kim82, GW87, Day87]. Correlation, like magic, “pushes predicates down” into subqueries. Its inverse, decorrelation, “pulls predicates up” from subqueries. One major difference between magic and these other techniques is that magic applies uniformly to hierarchical (tree-structured) and recursive queries (as well as queries with common subexpressions); these other techniques have been applied only to hierarchical queries. Performance comparisons of these techniques appear in Section 6.
Example 2.1 (Correlation, Decorrelation and Magic)
(C) SELECT Ename FROM emp e1
    WHERE Job = "Sr Programmer" AND
        Sal > (SELECT AVG(Sal) FROM emp e2
               WHERE e2.Dno = e1.Dno)
Query (C) selects senior programmers who make more than the average salary in their departments. As written, it involves correlation: for each employee who is a senior programmer, the average salary in her department is calculated, and the employee is selected if her salary is higher. No irrelevant information is computed, but the average salary for a department might be calculated many times (if several employees are in the same department). In addition, access to \( e1 \) and \( e2 \) must be done in a specific order (\( e1 \), then \( e2 \)). Finally, processing for \( e2 \) is row-at-a-time rather than set-oriented, and set-orientation tends to be a major performance advantage of the relational model. Thus correlation diminishes the non-procedural and set-oriented advantages of the relational model, and may also perform redundant computations.
Query (C) can be transformed into the decorrelated query (which uses a temporary table, dep-avgsal(Dno, ASal), defined within the query)
(D1) SELECT Ename FROM emp, dep-avgsal
     WHERE Job = "Sr Programmer" AND Sal > ASal AND
         emp.Dno = dep-avgsal.Dno
(D2) dep-avgsal(Dno, ASal) AS
     (SELECT Dno, AVG(Sal) FROM emp GROUPBY Dno)
Unlike the correlated query, the decorrelated query is set-oriented (average salaries are computed for all the departments in one operation, rather than one department at a time as an employee in the department is selected). It is also non-procedural (the two scans of employee can be switched around). An execution plan might access employees by Dno, calculate the average salary for that Dno, forming a tuple of dep-avgsal, and then find all senior programmers in that Dno with a higher salary. But decorrelation also has a substantial disadvantage: the average salary is determined for all departments, whether or not they have senior programmers. If there are many departments and only a few have senior programmers, the cost of the irrelevant computation will be substantial.
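The equivalence of the correlated and decorrelated forms is easy to check on any SQL engine. A small sketch using Python's sqlite3 with a made-up emp table (standard SQL rather than SBSQL, and the data is invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (Ename TEXT, Job TEXT, Sal REAL, Dno INT)")
con.executemany("INSERT INTO emp VALUES (?,?,?,?)", [
    ("alice", "Sr Programmer", 120, 1),
    ("bob",   "Sr Programmer",  80, 1),
    ("carol", "Jr Programmer", 100, 1),
    ("dave",  "Sr Programmer",  90, 2),
])

# (C) correlated: the subquery is evaluated per candidate employee
correlated = con.execute("""
    SELECT Ename FROM emp e1
    WHERE Job = 'Sr Programmer'
      AND Sal > (SELECT AVG(Sal) FROM emp e2 WHERE e2.Dno = e1.Dno)
""").fetchall()

# (D) decorrelated: averages computed once, set-at-a-time, then joined
decorrelated = con.execute("""
    SELECT Ename
    FROM emp JOIN (SELECT Dno, AVG(Sal) AS ASal
                   FROM emp GROUP BY Dno) dep_avgsal
         ON emp.Dno = dep_avgsal.Dno
    WHERE Job = 'Sr Programmer' AND Sal > ASal
""").fetchall()

assert sorted(correlated) == sorted(decorrelated)
```

On this data, department 1's average is 100 and department 2's is 90, so only alice qualifies under both forms.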
The magic-sets approach combines the advantages of correlation and decorrelation though at a cost. After transformation the magic query \( S \) is
(S1) \[
\text{SELECT Ename FROM s\_mag, mag\_avgsal} \\
\text{WHERE Sal > Asal AND s\_mag.Dno = mag\_avgsal.Dno}
\]
1 Correlation can also exist in recursive queries, but that is beyond the scope of this paper [PF89].
2 Correlation might be implemented so that departmental salaries are stored in a temporary table. This has its own costs.
3 In this paper a program consisting of statements \( X_1 \) to \( X_n \) is referred to as program \( X \).
(S2) mag_avgsal(Dno, Asal) AS
(SELECT Dno, AVG(Sal) FROM mag, emp
WHERE mag.Dno = emp.Dno GROUP BY Dno)
(S3) mag(Dno) AS
(SELECT DISTINCT Dno FROM s_mag)
(S4) s_mag(Ename, Dno, Sal) AS
(SELECT Ename, Dno, Sal FROM emp
WHERE Job = "Sr Programmer")
One possible execution plan for S selects employees who are senior programmers (s_mag), determines which departments have at least one of these employees in them (mag), computes the average salary only for those departments (mag_avgsal), and then selects each senior programmer (s_mag) who makes more than her departmental average. The accesses can be ordered in other ways, just as they could be for the decorrelated query, and the operations are set-oriented. Moreover, no irrelevant data is touched, since the average is computed only for departments that have senior programmers. However, magic comes at the cost of computing extra tables, s_mag and mag.
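The magic query S can be exercised the same way. This sketch (again our own toy data in SQLite, with S1–S4 expressed as CTEs using the names from the text) runs the rewritten query and also materializes mag to show that a department with no senior programmers never has its average computed.

```python
# Toy run of the magic query S: S4 (s_mag), S3 (mag), S2 (mag_avgsal), S1.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE emp (Ename TEXT, Job TEXT, Dno INTEGER, Sal INTEGER);
INSERT INTO emp VALUES
  ('ann',  'Sr Programmer', 1, 90),
  ('bob',  'Sr Programmer', 1, 60),
  ('carol','Clerk',         1, 50),
  ('dave', 'Clerk',         3, 40);
""")

rows = con.execute("""
  WITH s_mag(Ename, Dno, Sal) AS
         (SELECT Ename, Dno, Sal FROM emp WHERE Job = 'Sr Programmer'),
       mag(Dno) AS
         (SELECT DISTINCT Dno FROM s_mag),
       mag_avgsal(Dno, Asal) AS
         (SELECT emp.Dno, AVG(emp.Sal) FROM mag, emp
          WHERE mag.Dno = emp.Dno GROUP BY emp.Dno)
  SELECT Ename FROM s_mag, mag_avgsal
  WHERE Sal > Asal AND s_mag.Dno = mag_avgsal.Dno
""").fetchall()

# Department 3 has no senior programmers, so mag = {1}: its average is
# never computed, unlike in the decorrelated query D.
mag = con.execute("""
  WITH s_mag(Ename, Dno, Sal) AS
         (SELECT Ename, Dno, Sal FROM emp WHERE Job = 'Sr Programmer')
  SELECT DISTINCT Dno FROM s_mag
""").fetchall()
print(rows, mag)  # [('ann',)] [(1,)]
```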
The magic query S is similar to the semi-join example given in Section 2.1. Information about relevant bindings (departments that have senior programmers) is passed "sideways" from emp to mag_avgsal.
2.3 Previous Work
Kim [Kim82] originally studied the question of when quantified subqueries could be replaced by joins (or anti-joins). Ganski and Wong [GW87] and Dayal [Day87] did additional work on both eliminating nested subqueries and making correlated subqueries more efficient. These papers recognize that correlated subqueries can be very inefficient because they are not set-oriented. They eliminate correlation by introducing additional relational operators, including outer join [GW87] and generalized aggregation [Day87]. Their transformations can be applied to SQL queries written in a specific form, where the user either pushes down join predicates from the query to the subquery or writes a predicate referring to tables in both the subquery and the query. They use these predicates, which we think of as sideways predicates, to generate bindings in the query.
In contrast, our Extended Magic Sets (EMS) technique takes queries without user-specified correlation and determines which sideways predicates should be pushed down. This is an advance, since users might miss some opportunities for predicate pushdown, and some cannot even be specified syntactically. However, if users specify correlated predicates, it is desirable to make them set-oriented. For this, we rely on techniques similar to the ones presented in [GW87] and [Day87]. Hence we view Ganski's and Dayal's work as complementary to our work. Ganski's paper illustrates the complexity of query-rewrite, since it emends some previous transformations. This complexity supports our unified structured approach, in which we systematically transform an algebraically general class (that includes recursion) of queries.
3 Definitions
Magic-sets Transformation The Magic Sets algorithm rewrites a query so that the fixpoint evaluation of the transformed query generates no irrelevant tuples. The idea is to compute a set of auxiliary tables that contain the bindings used to restrict a table. The table expressions in the query are then modified by joining the auxiliary tables that act as filters and prevent the generation of irrelevant tuples. As a first step, however, we produce an adorned query in which tables are adorned with an annotation that indicates which arguments are bound to constants, which are restricted by conditions, and which are free, in the table expression using the table. For each table, we have an adorned version that corresponds to all uses of that table with a binding pattern that is described by the adornment; different adorned versions are essentially treated as different tables (and possibly solved differently). For example, p^b and p^f are treated as (names of) distinct tables. An adornment for an n-ary table is defined to be a string of b's, c's and f's. Argument positions that are treated as free (have no predicate on them) are designated as f, and positions that are bound to a finite set of given values (by equality predicates) are designated as b. Argument positions that are restricted in the goal by some non-equality predicate (condition) are designated as c.
The magic-sets transformations of [BMSU86, BR87] propagate bindings (equality predicates) in Datalog, using b and f adornments. Conditions are ignored. [MPRS90] extends the magic-sets transformation to propagate bindings in programs with duplicates and aggregation. The extension to conditions ([MPRS90]) needs to be adapted to work in presence of duplicates, and we present the idea in Section 4.1. We ignore c adornments and conditions in the following definition.
The Magic-Sets algorithm can be understood as a two-step transformation in which we first obtain an adorned query \(P^{ad}\) and then apply the following transformation:
We construct a new query \(P^{mg}\). Initially, \(P^{mg}\) is empty.
1. Create a new DISTINCT table m.p for each table p in \(P^{ad}\). The arity is the number of bound arguments of p.
2. For each table expression in \(P^{ad}\), add a modified version to \(P^{mg}\). If table expression t has head, say, p(t) (t is shorthand for all the attributes of p), the modified version is obtained by joining the table \(m.p(t^b)\) into the body (\(m.p\) denotes the magic table of \(p\), and \(t^b\) denotes the arguments of \(p\) that are bound).
3. For each table expression \(r\) in \(P^{ad}\) with head, say, \(p(t)\), and for each table \(q(t_q)\) referenced in its body, add a magic table expression to \(P^{mg}\). The head is \(m.q(t_q^b)\). The body contains all tables that precede \(q\) in the sips (defined below) associated with \(r\), and the magic table \(m.p(t^b)\).
4. Create a seed tuple \(m.q(\bar{c})\) from the equality predicates in the outermost query block, where \(\bar{c}\) is the set of constants equated to the bound arguments of \(q\).
Note that there is a magic table associated with each table in \(P\). If several table expressions with the same head are generated, they are replaced with a single table expression in which the body is the union of the bodies.
Intuitively, the magic-sets transformation involves adding magic tables to the FROM clause, and the corresponding join predicates to the WHERE clause, of each SQL statement.
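Steps 1–3 can be sketched for simple conjunctive table expressions. The following is our own schematic simplification, not Starburst code: rules are Datalog-style `(head, body)` pairs with atoms as `(name, args)`, and the example rule, the `answer` predicate, and the `bound` map are invented for illustration.

```python
# Schematic sketch of magic-sets steps 1-3 for a single conjunctive rule.
# `bound` maps each derived table to the indices of its bound arguments.

def magic_transform(rule, bound):
    head, body = rule
    hname, hargs = head
    hbound = [hargs[i] for i in bound[hname]]
    # Step 2: join m.<head> (the magic table of the head) into the body.
    modified = (head, [("m." + hname, hbound)] + body)
    # Step 3: for each derived table q in the body, the magic rule for q
    # contains m.<head> plus all atoms that precede q in the sips order
    # (here, simply the textual order of the body).
    magic_rules = []
    for i, (qname, qargs) in enumerate(body):
        if qname in bound:  # q is derived and used with bindings
            qbound = [qargs[j] for j in bound[qname]]
            mbody = [("m." + hname, hbound)] + body[:i]
            magic_rules.append((("m." + qname, qbound), mbody))
    return modified, magic_rules

# Roughly the query rule of the running example:
#   answer(E) :- emp(E, Dno, Sal), dep_avgsal(Dno, Asal).
rule = (("answer", ["E"]),
        [("emp", ["E", "Dno", "Sal"]), ("dep_avgsal", ["Dno", "Asal"])])
modified, magic = magic_transform(rule, {"answer": [], "dep_avgsal": [0]})
print(magic)
```

The generated magic rule defines `m.dep_avgsal(Dno)` from `m.answer` and the preceding `emp` atom, mirroring how M3 restricts dep-avgsal to relevant departments.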
**EXAMPLE 3.1 (Magic-sets Transformation)** Consider the query \(D\) of Example 2.1. We need to evaluate the average salary of a department in the view dep-avgsal if, and only if, the department has a senior programmer, as otherwise the average salary is not relevant. Magic-sets achieves this optimization by defining a magic table \((M3)\), and rewriting \(D1\) and \(D2\) as \(M1\) and \(M2\).
\[
\text{(M1)} \quad \text{SELECT Ename FROM emp, dep-avgsal} \\
\quad \text{WHERE Job = "Sr Programmer" AND Sal > Asal AND emp.Dno = dep-avgsal.Dno}
\]
\[
\text{(M2)} \quad \text{dep-avgsal(Dno, Asal) AS} \\
\quad \text{(SELECT Dno, AVG(Sal) FROM m-dep-avgsal, emp WHERE m-dep-avgsal.Dno = emp.Dno GROUP BY Dno)}
\]
\[
\text{(M3)} \quad \text{m-dep-avgsal(Dno) AS} \\
\quad \text{(SELECT DISTINCT Dno FROM emp WHERE Job = "Sr Programmer")}
\]
Supplementary Magic-sets In the magic query \(M\) of Example 3.1, the predicate Job = "Sr Programmer" is repeated in statements \(M1\) and \(M3\). The program \(S\) in Example 2.1 stores the result of the selection in \(s\_mag\), and uses it as a common subexpression when evaluating \(S1\) and \(S3\). \(s\_mag\) is called the supplementary magic-set. Program \(D\) is transformed into program \(S\) using the supplementary magic-sets transformation ([BR87]). We use supplementary magic-sets in Section 6 because the performance advantage of using common subexpressions is important. For ease of exposition, we use magic-sets in other sections.
**SIPS** A Sideways Information Passing Strategy is a decision as to how to pass information sideways in the body of a table expression while evaluating the table expression. The information passed comes from the predicates in the table expression. [BR87, MFPR90] define SIPS formally.
A sips can be full, meaning that all eligible predicates are used as soon as possible, or partial. A full sips can be defined by an ordering on the tables in the FROM clause. We refer to this order as the sips order.
The magic-sets transformation passes information sideways between tables being joined, according to a given sips order. In this paper, we assume that tables are listed in the FROM clause in the sips order.
**Dependency Graphs** Dependency graphs are commonly used to detect recursions. In a table expression, the tables in the body (the FROM clause) are used to define the table in the head. If table \(q\) defines table \(r\) in some table expression, we denote this by \(q \rightarrow r\), which is called a dependency edge. We define \(\rightarrow^{*}\) to be the transitive closure of \(\rightarrow\). A query is recursive if its dependency graph has cycles, that is, if there exists a table \(q\) such that \(q \rightarrow^{*} q\). All tables in a strongly connected component (scc) of the dependency graph are said to be mutually recursive.
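These definitions are easy to operationalize. The small helper below (our own illustration, not from the paper) computes which tables satisfy \(q \rightarrow^{*} q\) by checking reachability from each table; the edge list anticipates the recursion introduced in Example 4.2.

```python
# Detect recursive tables: q is recursive iff q is reachable from itself
# through the dependency edges q -> r ("q defines r").

def recursive_tables(edges):
    graph = {}
    for q, r in edges:
        graph.setdefault(q, set()).add(r)

    def reachable(start):
        seen, stack = set(), [start]
        while stack:
            for nxt in graph.get(stack.pop(), ()):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

    return {q for q in graph if q in reachable(q)}

# Edges of a magic query like Example 4.2's:
# q defines r, r feeds the magic table m.q, and m.q defines q.
edges = [("q", "r"), ("r", "m.q"), ("m.q", "q"), ("s", "q"), ("t", "r")]
print(recursive_tables(edges))  # the scc {q, r, m.q} is mutually recursive
```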
**4 The Extended Magic-sets Transformation**
The magic-sets transformation defined in Section 3 is applicable to relational systems with duplicates and aggregation. The definition borrows results from [MPR90], where the semantics of duplicates and aggregation in the presence of recursion are defined, and the use of aggregation is limited to the classes of monotonic and magically stratified programs, which are closed under the magic-sets transformation.
The magic-sets transformation was long believed to be useful only for propagating bindings (equality predicates). Our recent paper, [MFPR90], addresses the extension of the technique to propagating conditions (non-equality predicates) in Datalog programs, using a ground magic-sets transformation (GMT). In Section 4.1 we extend GMT to work in the presence of duplicates.
Further, we discuss how the magic-sets technique may be useful in purely nonrecursive systems (Section 4.2), and we present a one-phase algorithm for adorning and magic-transforming a query that lets us push arbitrary conditions using just the \(b, c, f\) adornments (Section 4.3).
**4.1 Pushing Conditions using Magic**
The ground magic-sets transformation for propagation of conditions, as presented in [MFPR90], does not preserve duplicate semantics. We consider a simple example.
EXAMPLE 4.1 Consider the following program $P$. $p_1$ and $p_2$ are arbitrary built-in predicates (conditions), and $u, v, s, w$ are EDB relations.
$(P_1)$: $r(X, Z) \text{ AS } (\text{SELECT } X, Z \text{ FROM } p_1(X) \text{ AND } t(X, Y) \text{ AND } u(Y, Z)) \text{ UNION } (\text{SELECT } X, Z \text{ FROM } p_2(X) \text{ AND } t(X, Y) \text{ AND } v(Y, Z))$.
$(P_2)$: $t^{bf}(X, Z) \text{ AS } (\text{SELECT } X, Z \text{ FROM } s(X, Y) \text{ AND } w(Y, Z))$.
Let $s = [(1, 2), (1, 2), (1, 2)], w = [(2, 3)], v = [(3, 4)],$ and $u = [(1, 2)]$. Let $p_1(1)$ and $p_2(1)$ be true. The duplicate semantics of $P$ defines $r$ to be the multiset $[(1, 4), (1, 4), (1, 4)]$. GMT transforms the definition $P_2$ into:
$(M_2)$: $t^{bf}(X, Z) \text{ AS } (\text{SELECT } X, Z \text{ FROM } m(X, Y) \text{ AND } w(Y, Z))$.
$(M_3)$: $m(X, Y) \text{ AS } (\text{SELECT } X, Y \text{ FROM } p_1(X) \text{ AND } s(X, Y)) \text{ UNION } (\text{SELECT } X, Y \text{ FROM } p_2(X) \text{ AND } s(X, Y))$.
$P_1$ is copied into $M_1$ to complete the magic program. The view $m$ has six copies of the tuple $(1,2)$; consequently the view $r$ has six copies of $(1,4)$. As a result, programs $P$ and $M$ are not duplicate equivalent. Simply defining $m$ to be a DISTINCT table does not help us, for then $m$ will have one copy of $(1,2)$, and $r$ will have one copy of $(1,4)$.
As an aside, if either $r$ or $t$ was a DISTINCT table, GMT would preserve the query semantics.
GMT constructs customized magic-sets $(m)$, known as supplementary magic-sets, for each SELECT clause by combining the conditions $(p_1(X))$ with a table in the FROM clause. Preservation of duplicate semantics requires us to eliminate overlapping magic tuples ($X$ values common to $p_1$ and $p_2$), while retaining duplicates in tables copied from the FROM clause ($s$). Such an operation is not possible if the magic tables are never constructed, as is the case in GMT.
A straightforward solution is to construct the magic-set explicitly, writing $M_2$ and $M_3$ as:
$(E_2)$: $t^{bf}(X, Z) \text{ AS } (\text{SELECT } X, Z \text{ FROM } m(X) \text{ AND } s(X, Y) \text{ AND } w(Y, Z))$.
$(E_3)$: $m(X) \text{ AS } (\text{SELECT } X \text{ FROM } p_1(X) \text{ AND } s(X, Y)) \text{ UNION } (\text{SELECT } X \text{ FROM } p_2(X) \text{ AND } s(X, Y))$, where $m$ is a DISTINCT table.
$m$ is the magic-set. Some joins are repeated in the above construction, such as the join with $s$. In the Starburst implementation, we have a solution that lets us use the supplementary transformation; we omit the description due to lack of space.
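The multiset arithmetic of Example 4.1 can be replayed directly. The sketch below (our own, using `Counter` for duplicate semantics; the `p_1` branch through $u$ matches nothing for this data, so it is omitted) reproduces the counts: three copies of $(1,4)$ under the correct semantics and under the explicit DISTINCT magic set, but six under GMT.

```python
# Replay Example 4.1's duplicate counts with multisets (Counter).
from collections import Counter

s = Counter({(1, 2): 3})   # three copies of (1, 2)
w = Counter({(2, 3): 1})
v = Counter({(3, 4): 1})
p1 = p2 = {1}              # p1(1) and p2(1) are true

def join(left, right):
    # multiset join on left's second column = right's first column
    out = Counter()
    for (x, y), c1 in left.items():
        for (y2, z), c2 in right.items():
            if y == y2:
                out[(x, z)] += c1 * c2
    return out

# Correct semantics: t = s |x| w, then r = t |x| v (p2 branch only).
t = join(s, w)
r = join(t, v)

# GMT: m = (p1 ^ s) UNION ALL (p2 ^ s) has six copies of (1, 2),
# so t and r are inflated to six copies as well.
m_gmt = Counter()
for (x, y), c in s.items():
    m_gmt[(x, y)] += c * ((x in p1) + (x in p2))
r_gmt = join(join(m_gmt, w), v)

# Explicit magic set: DISTINCT X values satisfying p1 or p2;
# duplicates are preserved in s itself.
m = {x for (x, y) in s if x in p1 | p2}
t_e = join(Counter({(x, y): c for (x, y), c in s.items() if x in m}), w)
r_e = join(t_e, v)
print(r[(1, 4)], r_gmt[(1, 4)], r_e[(1, 4)])  # 3 6 3
```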
4.2 Magic-sets Transformation for Nonrecursive Programs
It is well-known that the magic-sets transformation has the undesirable property of merging sets. Consequently a nonrecursive program can become recursive.
EXAMPLE 4.2 (Recursion due to Magic): In the program $P$,
$(P_1)$: $\text{SELECT } A, B \text{ FROM } r(A, C), q(C, B) \text{ WHERE } A = 10$.
$(P_2)$: $r(A, C) \text{ AS } (\text{SELECT } A, C \text{ FROM } q(A, D), t(D, C))$.
$(P_3)$: $q(E, F) \text{ AS } (\text{SELECT } E, F \text{ FROM } s(E, F))$.
$q$ is used twice, once in $(P_1)$ and once in $(P_2)$, with a $bf$ adornment at both places. $q^{bf}$ gets bindings from $r^{bf}$ (in $P_1$), and from $m.r^{bf}$ (in $P_2$); its magic set is thus a union. The magic query is
$(M_1)$: $\text{SELECT } A, B \text{ FROM } r^{bf}(A, C), q^{bf}(C, B) \text{ WHERE } A = 10$.
$(M_2)$: $r^{bf}(A, C) \text{ AS } (\text{SELECT } A, C \text{ FROM } m.r^{bf}(A), q^{bf}(A, D), t(D, C))$.
$(M_3)$: $q^{bf}(E, F) \text{ AS } (\text{SELECT } E, F \text{ FROM } m.q^{bf}(E), s(E, F))$.
$(M_4)$: $m.r^{bf}(10)$.
$(M_5)$: $m.q^{bf} \text{ AS } (\text{SELECT } C \text{ FROM } r^{bf}(A, C) \text{ WHERE } A = 10) \text{ UNION DISTINCT } (\text{SELECT } A \text{ FROM } m.r^{bf}(A))$.
Query $(M)$ is recursive, as its dependency graph has the cycle $q^{bf} \rightarrow r^{bf}$ (by $M_2$), $r^{bf} \rightarrow m.q^{bf}$ (by the first branch, $5a$, of $M_5$), and $m.q^{bf} \rightarrow q^{bf}$ (by $M_3$).
Many existing DBMS's do not support recursion. Usability of the EMS in such systems will be severely limited if recursive queries are produced as a result of the magic-sets transformation.
Consider Example 4.2. Table $q^{bf}$ is recursive, but the newly introduced recursion is through the magic table, $m.q^{bf}$ (as it must be for any recursion introduced by the magic transformation). $m.q^{bf}(10)$ is computed from branch $(5b)$ of $(M_5)$, and leads to tuples in $q^{bf}$ by $(M_3)$. These generate tuples for $r^{bf}$ through $(M_2)$. Tuples in $r^{bf}$ generate new tuples in $m.q^{bf}$ through branch $(5a)$, and thence in $q^{bf}$. But now, the new $q^{bf}$ tuples cannot fire the body of $(M_2)$ to generate new $r^{bf}$ tuples. Thus the recursion does not "feed into itself", and terminates after one loop. The program can therefore be written nonrecursively.
We can avoid the introduction of recursion in the magic program by not recognizing common subexpressions. If we treat the two uses of $q$ in program $P$ as two different tables, $q_1$ and $q_2$, the magic-sets transformation will not introduce recursion, as the reader may verify by performing the magic-sets transformation on a program $P'$ derived from $P$ with $q_1$ and $q_2$ defined according to $P_3$.
We now make precise the intuition underlying the above example.
Proposition 4.1 Given a query $P$, let $M$ be the query obtained by magic transformation of $P$ according to a set of full sips. Then, (A) if $P$ is a tree-structured query (a query that does not have common subexpressions), $M$ is a dag⁰, and (B) if $P$ is a dag, $M$ will have bounded recursion that can be avoided altogether by not forming the common subexpressions. □
4.3 Simple-bcf Adornments
The magic-sets transformations of [BMSU86, BR87, MFPR90] assume that an adorned program is available as input. The transformation thus requires two phases: the program is adorned in the first phase, and magic transformed in the second phase. In this subsection, we present a one-phase algorithm that does adornment and magic transformation together, and show how it can help in reducing the complexity of adornments.
We view adornments as providing three functions:
Function 1 The adornment $\alpha$ on a table $t$ is an abstraction for the restriction on the table $t$ at the point where it is used. This abstraction, $\alpha$, and not the actual restriction, is used to decide how the table expressions for $t^\alpha$ will be evaluated.
Function 2 $t^\alpha$ is evaluated in an identical fashion for all restrictions that are abstracted by the adornment $\alpha$ (same sips, sips orders, join orders and adornments for tables referenced in $t$'s table expressions). Thus if the abstraction is not a good one, $t^\alpha$ will be solved less than optimally for some of the restrictions. An adornment should be faithful ([MFPR90]) in that it should allow an optimal evaluation to be chosen for all restrictions (within the class of restrictions the adornment pattern is trying to capture) generating that adornment.
Function 3 Adornments specify when two uses of $t$ can share the same copy of $t$ as a common subexpression. The motivation behind the requirement that the uses share the same adornment is that the magic-sets generated from the uses be over the same arguments, which permits union.
The bcf adornment pattern introduced in [MFPR90] uses the c adornment for independent conditions only. A condition on an attribute X is said to be independent if it can be expressed without reference to any free (f) attribute. Thus X > 10 is independent. X > Y is independent if Y is bound, otherwise it is dependent. The adornment algorithm and the following GMT of [MFPR90] work on the assumption that only independent conditions are pushed down, and that no conditions are deduced from the given ones. [MFPR90] also suggests that with the two-phase algorithm, it is not possible to capture and push down dependent and more general types of conditions using the bcf adornment pattern. Stronger adornment patterns are needed to push down such conditions.
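Computing a bcf-style adornment string is mechanical once argument positions are classified. The helper below is our own sketch (not the paper's algorithm): it takes the positions bound by equality and the positions restricted by any other condition, and emits the adornment.

```python
# Compute a simple-bcf adornment string for one use of an n-ary table.
# eq_bound:    argument positions bound to constants by equality predicates (b)
# conditioned: positions restricted by any non-equality condition (c);
#              under the simple-bcf pattern this may be a dependent condition
#              such as X > Y, not just an independent one such as X > 10.

def adornment(arity, eq_bound=(), conditioned=()):
    letters = []
    for i in range(arity):
        if i in eq_bound:
            letters.append("b")
        elif i in conditioned:
            letters.append("c")
        else:
            letters.append("f")
    return "".join(letters)

# Example 4.3: both "X > 10 AND Y > 10" (independent) and "X > Y"
# (dependent) condition the first two arguments of t(X, Y, Z).
print(adornment(3, conditioned=(0, 1)))  # ccf
```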
In our one-phase algorithm, we generate magic-sets for a table $t$ as we generate its adornments, before adorning the bodies of the table expressions defining $t$. Later, when we determine the sips in the table expressions of $t^\alpha$, and adorn the tables referenced, we know the actual conditions on $t^\alpha$ (since we can look up the magic-set), rather than just the adornment $\alpha$. We then use these actual conditions to make a better choice of how to evaluate the table expression.
Define the simple-bcf adornment pattern to be similar to the bcf pattern of [MFPR90] (discussed in Section 4.1), except that a $c$ adornment on an attribute now represents any type of condition on that attribute, not just an independent condition.
We now explain how having the magic-sets available while adorning a table expression for $t$ enables the simple-bcf pattern to fulfill the three functions of adornments given earlier, even though the $c$ adornment represents an arbitrary condition.
In the following lemma we borrow the definition of grounding tables from [MFPR90]. Given a condition $p$ on a derived table $t$, a set of tables in the FROM clause of $t$ containing all the attributes referenced in $p$ is called a grounding set. In statement P2 of Example 4.1, $s$ is the grounding table for the restriction X > 10.
Lemma 4.1 Let $p_1$ and $p_2$ be two restrictions that condition the same attributes of a derived table $t$. Then, if a set $G$ is a grounding set for $p_1$, $G$ is also a grounding set for $p_2$.
Using Lemma 4.1, we show that the one-phase algorithm with simple-bcf adornments performs all the functions we want adornments to perform.
Function 1 With the actual restrictions on a table available at the time its body is adorned, adornments are no longer needed for Function 1. As a result, the abstraction they represent is not important.
Function 2. Function 2 can be done by the simple-bcf pattern for the class of arbitrary conditions; this follows from Lemma 4.1 and the Ground Magic-sets Transformation [MFPR90]. We illustrate with an example.
Example 4.3 (Simple-bcf Adornment)
(P1) SELECT X, Y, Z FROM t(X, Y, Z)
WHERE X > 10 AND Y > 10
(P2) SELECT X, Y, Z FROM t(X, Y, Z)
WHERE X > Y
(P3) t(X, Y, Z) AS (SELECT X, Y, Z
FROM q1(X), q2(Y), s(X, Z))
Both queries P1 and P2 generate the adornment $t^{ccf}$.
By Lemma 4.1, $\{q_1, q_2\}$ is a grounding set for any ccf restriction that conditions the first two arguments of $t$. $q_1$ and $q_2$ should be adorned differently for the two uses of $t^{ccf}$, while $s$ should be adorned $bf$ for both uses. If we were adorning the program without constructing magic-sets and without using any information besides the ccf adornment on $t$, we could not adorn as desired. However, using our one-phase algorithm, we get
(M1) SELECT X, Y, Z FROM $t^{ccf}$(X, Y, Z)
WHERE X > 10 AND Y > 10
⁰ A dag can have common subexpressions, but it does not have recursion.
We are implementing EMS in Starburst. We have written the pseudo-code, and have C code that executes the transformation in simple cases. In this section we give a sketch of our implementation.
EMS is a part of the query-rewrite phase of the Starburst optimizer. Rewrites are done by a (production) rule-based system that encodes each query transformation as a rewrite rule ([HP88]). A forward chaining engine traverses the query graph depth first (normally), applying rewrite rules. EMS is applied to graph elements representing table expressions, and it is applied to one table expression at a time. Multiple firings of the EMS rule, as the graph is traversed, cumulatively produce a transformed query.
Starburst includes a number of rewrite rules besides the magic-sets rule. The predicate pushdown rules determine what predicates get pushed from table expressions into referenced tables, and in what form. The EMS rule then places the predicates in the right place (as a magic-set).
The way in which magic-sets are applied to a table expression can depend upon the operation in the table expression. For example, magic cannot be applied to operations such as GROUP BY (the bad operations) exactly as it is applied to operations such as SELECT (the good operations).
When magic processing acts on a table expression for table t, previous processing ensures that the head t is adorned, a magic table mt for t is available, and (for good operations) that mt is grounded and joined into the body of the table expression. Also, the sips order within the table expression must be known.
During magic processing of $t$, all predicates in the table expression are pushed into each table referenced in the table expression (using the predicate pushdown rules). An adornment $\alpha$ for each referenced table $r$ is determined, and a table expression for $r^\alpha$, with a body identical to that of the table expression for $r$, is created. The magic-set $m.r^\alpha$ for $r^\alpha$ from its use in $t$ is formed, and if $r$ is good, $m.r^\alpha$ is grounded and added to the table expression for $r^\alpha$.
Magic processing is performed for every table expression visited in a traversal of the query graph. We avoid repeatedly processing a table expression except for bad table expressions under special conditions. The following theorem holds for our EMS algorithm (assuming we first get rid of all cycles in query Q consisting entirely of bad tables).
**Theorem 5.1** For any query graph Q, EMS terminates, and EMS(Q) is equivalent to Q under the evaluation strategy of Starburst.
The adornment and the magic-set transformation are combined in a one-phase algorithm (Section 4.3). Mostly, the simple-bcf adornment is used, although bad operations require special refined adornments. EMS is extensible with respect to (1) new operations in table expressions, and (2) the traversal strategy (depth-first, breadth-first, bottom-up, etc.).
### 6 Performance
In this section we present performance measurements that illustrate how EMS accelerates complex queries (such as decision-support queries) consisting of several query blocks. It is not uncommon for such queries to take hours (or even days) to complete. Query transformation can improve performance by several orders of magnitude.
A comprehensive performance evaluation requires a definition of a benchmark database and a set of queries for a particular workload. We focus on a complex query workload (with multiple predicates, joins, aggregations and subqueries), rather than a transaction workload, where queries are relatively simple. Although transaction benchmarks have been proposed, [A+85, TPC89], complex query workloads are still at a preliminary stage ([TOB89, O'N89]). To measure the performance effect of the magic-sets transformation, we employ a scaled up (by a factor of 10) version of the DB2 benchmark database described in [Loc86].
Magic-sets transformations have been studied in the context of recursive queries, and the usefulness of magic-sets for recursive queries is explained in [BR86, BR87]. In this section we study nonrecursive queries.
Our performance measurements were done on the IBM DB2 V2R2 relational DBMS using the DB2PM performance monitoring tool [DB88] to determine elapsed time (total time taken by system to evaluate the query) and I/O time (the time for which I/O devices are busy). We measured the performance of each query both before and after applying the magic-sets transformation. Both representations of the query went through the query compilation process, including cost optimization. Performance figures for several of the queries we measured are described below.
The DB2 benchmark database is based on an inventory tracking and stock control application. Work centers, represented by the wkc table, have locations (locl). Items (itm) are worked on at locations within work centers, and the table itl captures this relationship. Each item may have orders (itp). Some physical characteristics of the database are shown in Table 1.
Predicate pushdown and set-oriented computation are the two key factors in query optimization and execution. The magic-sets transformation enables us to take advantage of both. Advantages of pushing down local predicates, such as (Job = "Sr Programmer") in query D1 of Example 2.1, are well-known. We concentrate on pushdown of join predicates that pass information sideways (sips predicates), such as (emp.Dno = dep-avgsal.Dno) in query D1 of Example 2.1.
Set-oriented computation is desirable as it usually leads to improved performance over an equivalent fragmented or tuple-at-a-time computation. Pushing join (or sips) predicates by correlation fragments the computation, causing the subquery to be evaluated once for each value passed down. As a result, we may lose the efficiency of sequential prefetch ([TG84]) because each computation fragment does not access enough pages to take full advantage of sequential prefetch in terms of amortizing the cost of an I/O call across a large number of pages. Inefficiency can also arise in accessing data through nonclustered indices. If computation is not fragmented, we extract the TID (tuple ID) of qualified tuples from the index, sort the results by page IDs, and then do the I/Os ([MHWC90]). Hence, each relevant data page is retrieved only once. In a fragmented computation the same page may be retrieved many times, once by each computation fragment that is interested in a tuple on the page. Further, each fragment has a certain fixed cost associated with operations such as opening and closing scans, and sort initializations (e.g., initialization of the tournament trees when tournament sorts are used). In a set-oriented computation, the fixed costs are incurred once only. With correlation the fixed costs are incurred once for each evaluation of the subquery. Query transformations that result in non-set-oriented computation can therefore degrade performance significantly, as we see in Section 6.3.
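The TID-sort point can be quantified on a toy model. The sketch below (our own invention: TIDs as `(page, slot)` pairs, with hypothetical data) counts page fetches for a fragmented computation, where each fragment fetches its tuples' pages independently, versus a set-oriented one that gathers all TIDs, sorts by page ID, and fetches each page once.

```python
# Toy model of the TID-sort optimization: each TID is (page_id, slot).

def pages_touched_fragmented(fragments):
    # Each computation fragment fetches the pages of its own tuples
    # independently, so shared pages are fetched once per fragment.
    return sum(len({page for (page, _slot) in tids}) for tids in fragments)

def pages_touched_tid_sort(fragments):
    # Set-oriented: gather all qualifying TIDs, sort by page id,
    # and fetch each relevant data page exactly once.
    all_tids = sorted(tid for tids in fragments for tid in tids)
    return len({page for (page, _slot) in all_tids})

# Three computation fragments whose qualifying tuples share pages 1 and 2.
fragments = [[(1, 0), (2, 3)], [(1, 5)], [(2, 1), (1, 7)]]
print(pages_touched_fragmented(fragments), pages_touched_tid_sort(fragments))
# 5 page fetches when fragmented, 2 when set-oriented
```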
Evaluation of performance of magic-sets is based on the two factors discussed above: predicate pushdown (or sideways information passing) and set-oriented computation. The effect of predicate pushdown depends on how bindings affect the query plan of (a piece of) a query. For example, the magic-sets transformation may provide bindings for a column, so an index on that column becomes an efficient access path. The effect of set-oriented computation depends on the cardinality of the binding set (with and without duplicates). The higher the cardinality, the greater is the benefit of using set-oriented information passing. There are numerous queries where the above two factors are important. We now present some of the many queries we used in our experiments.
### 6.1 Experiment 1
In this experiment, selective bindings are passed to a subquery. The collection of bindings does not contain duplicates. The experiment uses the view V1 which, for each item and work center, computes the average time spent¹⁰ on that item in locations within the work center.
\[
(V1) \text{ vitemtime(itemn, wkcen, avgtime) AS} \\
\quad \text{ (SELECT itemn, wkcen, AVG(loctime) FROM itl GROUP BY itemn, wkcen) }
\]
Consider the query Q1: for items ordered with a quantity (qcomp) of 450, find the average time spent on each such item in locations within each work center that works on the item.
\[
(Q1) \text{ SELECT DISTINCT itemn, wkcen, avgtime} \\
\quad \text{ FROM itp, itm, vitemtime} \\
\quad \text{ WHERE itp.qcomp = 450 AND itp.itemn = itm.itemn} \\
\quad \text{ AND itp.itemn = vitemtime.itemn}
\]
¹⁰ A location works on an item for loctime = finishtime - starttime.
The following plan to solve Q1, "Compute the view vitemtime, store it in a temporary table, and use it to compute Q1", took about 150 minutes to execute. The plan is inefficient since the view is computed for all items, even though the query needs the view for only a small subset of the items (the predicate on quantity is very selective). We can avoid the redundant computation by passing into the view (through query rewrite) a set of bindings on items for which the view needs to be computed.
The bindings can be passed by either correlation or magic-sets. With correlation, the predicate (itp.itemn = vitemtime.itemn) is pushed into the table expression corr_itemtime, filtering out computation of many groups. The correlated query\footnote{The view becomes a correlated table expression. Standard SQL does not allow correlated table expressions. We did the experiment using a variant of C1 whose execution cost is close, but definitely less, than the execution cost of C1.} is
\begin{verbatim}
(C1) SELECT DISTINCT itemn, wkcen, avgtime
     FROM itp, itm, corr.itemtime(itp.itemn, wkcen, avgtime) AS
       (SELECT itemn, wkcen, AVG(loctime) FROM itl
        WHERE itl.itemn = itp.itemn
        GROUP BY wkcen)
     WHERE qcomp = 450 AND itp.itemn = itm.itemn
\end{verbatim}
The plan for C1 evaluates the view \textit{corr.itemtime} multiple times. During each evaluation, the index on the \textit{itemn} column of the \textit{itl} table is used, and only the relevant tuples are retrieved. The predicate on \textit{itp} is such that there are no duplicates in the bindings (\textit{itp.itemn}) passed into the view.
With the magic-sets transformation, the supplementary magic set, \textit{s.mag}, is computed as a temporary table (M1a), \textit{s.mag} is used in computing a reduced view \textit{mag.itemtime} (M1b), and the original query is rewritten using the reduced view (M1c).
\begin{verbatim}
(M1a) s.mag AS
      (SELECT DISTINCT itemn FROM itp, itm
       WHERE qcomp = 450 AND itp.itemn = itm.itemn)
(M1b) mag.itemtime(itemn, wkcen, avgtime) AS
      (SELECT itl.itemn, itl.wkcen, AVG(loctime)
       FROM s.mag, itl WHERE s.mag.itemn = itl.itemn
       GROUP BY itl.itemn, itl.wkcen)
(M1c) SELECT DISTINCT s.mag.itemn, wkcen, avgtime
      FROM s.mag, mag.itemtime
      WHERE s.mag.itemn = mag.itemtime.itemn
\end{verbatim}
The plan to solve \textit{M1} computes the view \textit{mag.itemtime} by a nested-loop join, with \textit{s.mag} (a small table) as the outer and \textit{itl} (a large table) as the inner, using the index on the \textit{itemn} column of \textit{itl} to access only the relevant \textit{itl} tuples. The join is followed by grouping and aggregation.
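The contrast between the original plan and the rewritten plan M1 can be reproduced in miniature. The following sketch uses SQLite with toy data; the table contents, and the use of common table expressions in place of stored temporaries, are illustrative assumptions rather than the benchmark setup:

```python
import sqlite3

# Toy schema loosely following the paper's tables (contents assumed for
# illustration): itp(itemn, qcomp) holds orders, itl(itemn, wkcen,
# loctime) holds per-location times.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE itp(itemn INT, qcomp INT);
CREATE TABLE itl(itemn INT, wkcen INT, loctime REAL);
INSERT INTO itp VALUES (1, 450), (2, 100), (3, 450);
INSERT INTO itl VALUES (1, 10, 5.0), (1, 10, 7.0), (2, 20, 3.0),
                       (3, 30, 4.0), (4, 40, 9.0);
""")

# Original plan: materialize the full view, then filter it.
original = con.execute("""
    WITH itemtime(itemn, wkcen, avgtime) AS
      (SELECT itemn, wkcen, AVG(loctime) FROM itl GROUP BY itemn, wkcen)
    SELECT DISTINCT v.itemn, v.wkcen, v.avgtime
    FROM itp, itemtime v
    WHERE itp.qcomp = 450 AND itp.itemn = v.itemn
""").fetchall()

# Magic rewrite: compute the binding set first (M1a), restrict the view
# to it (M1b), then answer the query from the reduced view (M1c).
magic = con.execute("""
    WITH s_mag(itemn) AS
      (SELECT DISTINCT itemn FROM itp WHERE qcomp = 450),
    mag_itemtime(itemn, wkcen, avgtime) AS
      (SELECT itl.itemn, itl.wkcen, AVG(loctime)
       FROM s_mag JOIN itl ON s_mag.itemn = itl.itemn
       GROUP BY itl.itemn, itl.wkcen)
    SELECT DISTINCT itemn, wkcen, avgtime FROM mag_itemtime
""").fetchall()

# Same answer; but the rewritten view never aggregates item 4.
assert sorted(original) == sorted(magic)
print(sorted(magic))  # [(1, 10, 6.0), (3, 30, 4.0)]
```

The rewrite pays off exactly when the binding set is much smaller than the set of groups the full view would compute.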
Performance results are summarized in Table 2. For each query, we give the elapsed and I/O times. The figures are normalized with respect to a value of 100 for the original query. Both correlation and magic-sets improved performance by \textit{two orders of magnitude}, reducing the elapsed time from 2.5 hours to about 1 minute. Neither technique was significantly better than the other, since both led to very similar plans for computing the view \textit{itemtime}, which was the expensive part of query Q1. With correlation, the bindings on \textit{itemn} were directly used to access \textit{itl} through an index. With magic-sets, a nested-loop join retrieved the set of bindings from \textit{s.mag} and used them to access \textit{itl} in exactly the same way. Correlation was marginally faster because the magic query \textit{M1} needed to store the supplementary magic set in a temporary table. The correlated query had much lower I/O time. Amongst the reasons are (a) the variant of C1 we used in our experiment had a much smaller output, and (b) temporaries needed to be stored while evaluating \textit{M1}.
### 6.2 Experiment 2
This experiment examines the effect of duplicate values in the set of bindings on performance. Experiment 1 is modified by changing the predicate on \textit{itp} so that it gives us 95 items, each with 100 orders. As a result there are 100 copies of each distinct binding value (\textit{itemn}). Performance results are summarized in Table 2.
Correlation computes the view \textit{corr.itemtime} for every copy of every binding value coming from the outer query. Magic-sets does significantly better because it eliminates duplicate bindings before storing them in \textit{s.mag}. Correlation can be improved so as to eliminate duplicate view evaluations: the result of each evaluation, along with the binding value used in the evaluation, can be saved in a temporary table, and duplicate evaluations replaced by a table lookup. We believe that such a modification would make correlation competitive with magic-sets on Experiment 2.
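The suggested improvement to correlation, saving each evaluation keyed by its binding value, is essentially memoization. A minimal sketch in Python (the data and names are illustrative, not the benchmark):

```python
from functools import lru_cache

# itl rows: (itemn, wkcen, loctime); contents assumed for illustration.
ITL = [(1, 10, 5.0), (1, 10, 7.0), (2, 20, 3.0)]
EVALS = {"count": 0}

@lru_cache(maxsize=None)          # stands in for the temporary-table lookup
def corr_itemtime(itemn):
    """Evaluate the correlated view for one binding value."""
    EVALS["count"] += 1
    groups = {}
    for i, wkcen, t in ITL:
        if i == itemn:
            groups.setdefault(wkcen, []).append(t)
    return tuple((itemn, w, sum(ts) / len(ts)) for w, ts in groups.items())

# 100 orders per item -> 100 copies of each distinct binding value.
bindings = [1] * 100 + [2] * 100
rows = {r for b in bindings for r in corr_itemtime(b)}
print(EVALS["count"])  # 2 evaluations instead of 200
```

Duplicate bindings now cost a table (cache) lookup rather than a full re-evaluation of the view, which is the behavior magic-sets gets for free by applying DISTINCT when building \textit{s.mag}.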
### 6.3 Experiment 3
This experiment shows the advantage of set-oriented information passing using magic-sets. There are no duplicates in the binding set. Consider the view \textit{V3}: for each work center, find the average time spent on items by locations of a certain type in that work center. \textit{V3} is similar to \textit{V1}, except that we filter out some locations, and project out the \textit{itemn} column from the output.
\begin{verbatim}
(V3) itemavgtime(wkcen, avgtime) AS
     (SELECT wkcen, AVG(loctime) FROM itl
\end{verbatim}
\begin{table}[h]
\centering
\begin{tabular}{|l|c|c|c|c|}
\hline
Query & \multicolumn{2}{c|}{Experiment 1} & \multicolumn{2}{c|}{Experiment 2} \\
 & Time & I/O & Time & I/O \\
\hline
Original   & 100.00 & 100.00 & 100.00 & 100.00 \\
Correlated & 0.40   & 0.06   & 2.10   & 0.005  \\
Magic      & 0.46   & 0.25   & 0.28   & 0.069  \\
\hline
\end{tabular}
\caption{Table 2: Relative Elapsed and I/O Times for Queries of Experiments 1 and 2}
\end{table}
Table 3: Relative Elapsed and I/O Times for Queries of Experiment 3
<table>
<thead>
<tr>
<th rowspan="2">Query</th>
<th colspan="2">10 bindings</th>
<th colspan="2">100 bindings</th>
</tr>
<tr>
<th>Time</th>
<th>I/O</th>
<th>Time</th>
<th>I/O</th>
</tr>
</thead>
<tbody>
<tr>
<td>Original</td>
<td>100.00</td>
<td>100.00</td>
<td>100.00</td>
<td>100.00</td>
</tr>
<tr>
<td>Correlated</td>
<td>513.00</td>
<td>453.00</td>
<td>513.00</td>
<td>452.00</td>
</tr>
<tr>
<td>Magic</td>
<td>55.00</td>
<td>46.00</td>
<td>111.00</td>
<td>62.20</td>
</tr>
</tbody>
</table>
Table 4: Relative Elapsed and I/O Times for a Variation of Experiment 3
<table>
<thead>
<tr>
<th>Query</th>
<th>Time</th>
<th>I/O</th>
</tr>
</thead>
<tbody>
<tr>
<td>Original</td>
<td>100.00</td>
<td>100.00</td>
</tr>
<tr>
<td>Correlated</td>
<td>52.50</td>
<td>22.74</td>
</tr>
<tr>
<td>Magic</td>
<td>8.60</td>
<td>5.17</td>
</tr>
</tbody>
</table>
The correlated query makes an indexed access to all tuples of itl that satisfy the location predicate. itl is a large table, and even when limited to a few locations, the access cost is substantial. As most of the access cost is repeated for each binding value, the cost of the query is almost linear in the cardinality of the binding set. In the corresponding magic query, itl is accessed only once and joined with the full set of bindings. The modification to correlation suggested in Subsection 6.2 cannot improve the performance of correlation on this experiment.
Experiment 3 shows the stability of the magic-sets transformation — even when it turns out not to be the optimal choice (because predicate selectivity estimates are wrong), it tends not to be much worse than the winning alternative. Since the primary goal of optimization is to avoid bad plans (and the secondary goal is to find a pretty good one), the magic-sets transformation often meets optimization goals better than correlation and decorrelation, which are considerably less stable. Unstable query transformations require the optimizer to estimate the cost of queries carefully. Due to the extremely high cost of the optimization process, the role of stable heuristics is becoming increasingly important. For this reason, the stability of magic-sets is very valuable.
Table 4 summarizes the performance results of Experiment 4, a variation on Experiment 3 with 10 bindings. The view is similar to V3, but a join of itl with itp and another table is needed before grouping. As a result, the grouped relation is large, and the grouping cost is significant. Magic-sets performs better than both correlation and decorrelation (due to set-oriented computation) and the original query (due to the reduction in grouping cost), and is the clear winner.
7 Conclusion
In this paper, we showed that the magic-sets transformation can be extended to handle general SQL constructs. We sketched the implementation of Extended Magic-sets as part of the rewrite component of a relational database system prototype, and presented a performance study contrasting magic-sets with correlation and decorrelation. Many significant results were abbreviated or omitted, including aspects of refined adornments, simple adornments and implementation details.
We believe that this paper demonstrates that the magic-sets technique (which formerly was a tool only for Datalog and logic programming) should be considered a practical extension of existing rewrite optimization techniques. Magic is indeed "relevant" for relational database systems: it is a general technique (applicable to nonrecursive as well as recursive queries) for introducing predicates that filter out accesses to irrelevant rows of tables as soon as possible. Database systems have been using limited variants of this idea for many years.
We do not suggest that the magic-sets transformations should be employed whenever they are applicable. Rather, magic is a valuable alternative that appears to be more stable than both correlation and decorrelation, subject to trade-offs that must be evaluated by a cost optimizer [SAC+79, Loh88]. Magic may be especially valuable for queries (such as decision-support queries) involving large numbers of joins, complex nesting of query blocks, or recursion. Such queries may be infeasible unless magic-sets are applied.
A number of special optimization techniques have been proposed in the literature. Some of these can be viewed as alternatives to magic-sets that try to exploit special properties of certain queries, such as linear queries on acyclic data (e.g., Henschen-Naqvi [HN84], Counting [BMSU86]), or special operators to express a restricted (and important) class of queries such as transitive closure (e.g., the alpha operator [Agr87]). When applicable, the above techniques are sometimes better than the magic-sets transformation. However, Example 12 illustrates that there are useful queries that cannot be expressed using linear recursion. The importance of magic-sets is that it is applicable to all (extended) SQL queries and provides a general optimization framework with good, stable performance. There are also techniques that further refine the magic-sets approach by recognizing special properties of the program and optimizing the transformed program suitably.
Although we are implementing magic as an extension of the rewrite optimization component in the Starburst extensible relational database prototype, many practical questions remain. A difficult open problem is the integration of rewrite optimization and cost optimization. Cost optimization may take time and space exponential in the number of tables joined. Transformations such as magic-sets may introduce exponentially many alternative queries, each of which requires cost optimization of a query more complex than the original. Clearly there is a structural relationship among the many query transformations, but we do not yet understand this problem well enough to reduce it to a manageable level by either algebraic techniques or engineering heuristics.
8 Acknowledgements
The Starburst project at IBM Almaden Research Center provided a stimulating environment for this work. We particularly thank Ted Messinger for helping us in setting up the DB2 benchmark database. Bruce Lindsay, Guy Lohman, Ulf Schreier, Jeff Ullman and the referees made helpful comments on drafts of this paper. We thank Jeff Ullman and members of the NALLI group at Stanford for many insightful discussions on this subject.
References
[AC89] F. Afrati and S. S. Cosmadakis. Expressiveness of Restricted Recursive Queries. In STOC, 1989.
[Agr87] R. Agrawal. Alpha: An Extension of Relational Algebra to Express a Class of Recursive Queries. In IEEE Data Engineering, 1987.
[HN84] L. Henschen and S. Naqvi. On Compiling Queries in Recursive First-Order Databases. JACM, 31(1):47-80, January 1984.
Cardpliance: PCI DSS Compliance of Android Applications
Samin Yaseer Mahmud and Akhil Acharya, North Carolina State University; Benjamin Andow, IBM T.J. Watson Research Center; William Enck and Bradley Reaves, North Carolina State University
https://www.usenix.org/conference/usenixsecurity20/presentation/mahmud
Abstract
Smartphones and their applications have become a predominant way of computing, and it is only natural that they have become an important part of financial transaction technology. However, applications asking users to enter credit card numbers have been largely overlooked by prior studies, which frequently report pervasive security and privacy concerns in the general mobile application ecosystem. Such applications are particularly security-sensitive, and they are subject to the Payment Card Industry Data Security Standard (PCI DSS). In this paper, we design a tool called Cardpliance, which bridges the semantics of the graphical user interface with static program analysis to capture relevant requirements from PCI DSS. We use Cardpliance to study 358 popular applications from Google Play that ask the user to enter a credit card number. Overall, we found that 1.67% of the 358 applications are not compliant with PCI DSS, with vulnerabilities including improperly storing credit card numbers and card verification codes. These findings paint a largely positive picture of the state of PCI DSS compliance of popular Android applications.
1 Introduction
Mobile devices have become a primary way for users to access technology, and for many users, it is the only way. The most wide-spread mobile device platforms, namely Android and iOS, are known for their vast application stores providing applications that offer a wide variety of functionality. An important subset of these applications takes payment information from consumers, including those providing entertainment, transportation, and food-related services.
The casual observer might expect that mobile apps offering paid services and goods will always leverage the established and centralized payment platforms provided by the mobile OS (e.g., Google Pay and Apple Pay). These payment platforms provide users a secure and trusted way to manage their payment information (e.g., credit card numbers) without unnecessarily exposing it to third parties. They do so by a) using a virtual token that is linked to the actual credit card, and b) handling both payment information and authorization outside of the third-party application [3]. However, recent work [8] reported that 4,433 of a random sample of 50,162 applications from Google Play were asking the user to enter credit card information via text fields in the application UI. There are many reasons why this may occur. For example, an application developer may wish to offer an alternative if the user does not want to use the Google or Apple payment system. Alternatively, the application developer may wish to avoid overhead charges from Google and Apple [37, 38]. Whatever the reason, the fact remains: applications are asking users to enter credit card information.
The use of payment information makes these applications distinct from the majority of mobile applications. Specifically, the PCI DSS [6] financial industry-standard mandates that software systems protect payment information in specific ways. While it is well known that mobile applications leak privacy-sensitive information [9, 15, 16, 22], fail to verify SSL certificates [17, 18, 20, 24, 29, 36], and misuse cryptographic primitives [14, 25], doing so while processing payment information represents a significant violation.
Our work is motivated by the research question: do mobile applications mishandle payment information? Answering this question introduces several technical research challenges. First, which PCI DSS requirements apply to mobile applications? PCI DSS v3.2.1 (May 2018) is 139 pages and applies to a broad variety of payment systems. Second, how can those requirements be translated into static program analysis tasks? The analysis should avoid false negatives while minimizing false positives. Third, how can the use of credit card information be programmatically identified? Distinguishing credit card text values requires understanding the semantics of widgets in the user interface.
In this paper, we design a static program analysis tool called Cardpliance that captures key requirements from PCI DSS that are applicable to mobile applications. Cardpliance combines recent work on static program analysis of Android applications (i.e., Amandroid [19]) and UI semantic inference.
Using the UI semantic inference of UiRef [8], Cardpliance reduces this sample to 358 applications known to ask for credit card information from the user. Cardpliance then identifies 40 applications with potential PCI DSS violations. After manual decompilation and source code review, we confirmed 6 non-compliant applications.
Broadly, our empirical study leads to the following takeaways. Overall, 98.32% of the 358 Android applications that we analyzed passed Cardpliance’s PCI DSS tests, which shows that the risk of financial loss due to insecure behaviors in mobile applications may not be as widespread as predicted. In particular, we did not find any evidence of applications sending payment information over the network in plaintext, over vulnerable SSL connections, or insecurely exposing the payment information via inter-component communication channels. However, we identified 6 applications, with nearly 1.5 million combined downloads on Google Play, violating PCI DSS requirements by storing or logging credit card numbers in plaintext (5/6), persisting credit card verification codes (3/6), and not masking credit card numbers when displaying them (2/6). These applications are placing the users, and potentially their customers, at unnecessary risk of fraud due to their non-compliant behavior.
This paper makes the following contributions:
- We encode PCI DSS requirements for mobile applications into static program analysis rules. These rules are largely captured using data flow analysis, but the existence of method calls on the corresponding control flow paths play a key role.
- We study a set of 358 applications known to prompt the user for credit card information. We find 6 applications that violate PCI DSS requirements.
- We propose a set of best practices for mobile application developers processing payment information. These suggestions distill hundreds of pages of the PCI DSS specification into key areas relevant to mobile apps.
We note that an entire industry of products exists to enable developers to identify individual PCI DSS violations in their own code [9,19,21,23,28]. By contrast, Cardpliance is to our knowledge the first system to identify violations across a significant portion of an entire industry with no prior knowledge of which apps might even handle credit card information. In addition to helping Android application developers be aware of unintentional PCI DSS violations, Cardpliance can also be used by Google to triage and investigate flaws in applications as they are submitted to the Play Store. Google could also show the output of Cardpliance in the Play Store’s developer console.
The remainder of this paper proceeds as follows: Section 2 describes relevant security requirements from PCI DSS. Section 3 overviews our approach to testing compliance with these requirements. Section 4 describes the design and implementation of Cardpliance. Section 5 uses Cardpliance to study popular applications accepting credit card information. Section 6 presents a set of best practices for mobile application developers processing payment information. Section 7 describes related work. Section 8 concludes.
2 PCI Data Security Standard
In the early 2000s, major credit card companies faced a crisis of payment fraud that was enabled by the widespread adoption of online financial transactions. As a result, the Payment Card Industry (PCI) released the first version of its Data Security Standard (DSS) in December 2004. PCI DSS [6] now regulates all financial systems seeking to do business with PCI members, which includes all major credit card companies. This standard applies to all computing systems that accept card payment, as well as those that store and process sensitive cardholder data. It defines a series of security measures that must be taken for such systems, including the use of firewalls and anti-virus software.
Not all PCI DSS security measures apply to mobile applications installed on consumer devices. Based on our expertise in mobile application security, we systematically reviewed the 139 pages of PCI DSS version 3.2.1 to determine which regulations apply. For example, mobile applications are payment terminals where a consumer may enter credit card information into either their own device or the device of a merchant. In contrast, mobile applications are not used as back-end payment processing systems. We then looked for the different types of sensitive information referenced within the standard. We found that PCI DSS distinguishes between cardholder data (CHD) and sensitive account data (SAD), which impacts software processing, as shown in Table 1.
Next, we reviewed the standard for requirements relating to mobile applications. We identified the following six relevant PCI DSS requirements:
**Requirement 1 (Limit CHD storage and retention time):** PCI DSS Section 3.1 states:
> Limit cardholder data storage and retention time to that which is required for business, legal, and/or regulatory purposes, as documented in your data retention policy. Purge unnecessarily stored data at least quarterly.
Therefore, mobile applications should minimize the situations when the credit card number and other CHD values are written to persistent storage. Ideally, CHD is never written, but if it is, the applications need a method to remove it. CHD should also never be written to shared storage locations, e.g., SDcard in Android, as it may be read by other applications.
Table 1: Types of payment information relevant to credit cards
<table>
<thead>
<tr>
<th>Information</th>
<th>Type</th>
<th>Storage Permitted</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>PAN</td>
<td>CHD</td>
<td>Yes</td>
<td>Primary Account Number, 16 digits, on front of card.</td>
</tr>
<tr>
<td>Cardholder Name</td>
<td>CHD</td>
<td>Yes</td>
<td>Cardholder’s name, on front of card</td>
</tr>
<tr>
<td>Expiry Date</td>
<td>CHD</td>
<td>Yes</td>
<td>Card expiration date, displayed as MM/YY</td>
</tr>
<tr>
<td>Service Code</td>
<td>CHD</td>
<td>Yes</td>
<td>3 digit code, each digit has own service code assignment</td>
</tr>
<tr>
<td>Full Track Data</td>
<td>SAD</td>
<td>No</td>
<td>Sensitive data stored on magnetic strip or on a chip</td>
</tr>
<tr>
<td>CAV2, CVC2, CVV2, CID</td>
<td>SAD</td>
<td>No</td>
<td>Three or four digit code on back of card</td>
</tr>
<tr>
<td>PIN</td>
<td>SAD</td>
<td>No</td>
<td>Pass code that verifies the user during transactions</td>
</tr>
</tbody>
</table>
CHD = Card Holder Data; SAD = Sensitive Account Data
Applications also do not have the ability to delete contents written to Android’s logcat logging infrastructure.
**Requirement 2 (Restrict SAD storage):** PCI DSS Section 3.2 states:
*Do not store sensitive authentication data after authorization (even if encrypted). If sensitive authentication data is received, render all data unrecoverable upon completion of the authorization process.*
Therefore, SAD values such as full track data (magnetic-stripe data or equivalent on a chip), card security codes (e.g., CAV2/CVC2/CVV2/CID), PINs and PIN blocks should never be written to persistent storage, even if it is encrypted or in a location only accessible to the application.
The standard states that data sources such as incoming transaction data, logs, history files, trace files, database schemes, and database contents should not contain SAD. While we expect few mobile applications ask for full track data, subsets of SAD are relevant. Furthermore, mobile applications should be careful not to include SAD in debugging logs and crash dumps.
**Requirement 3 (Mask PAN when displaying):** PCI DSS Section 3.3 states:
*Mask PAN when displayed (the first six and last four digits are the maximum number of digits you may display), so that only authorized people with a legitimate business need can see more than the first six/last four digits of the PAN. This does not supersede stricter requirements that may be in place for displays of cardholder data, such as on a point-of-sale receipt.*
The standard warns that the display of the full PAN on computer screens, mobile UI, payment card receipts, faxes, or paper reports can aid unauthorized individuals in performing unwanted activities. Therefore, after the user enters the credit card number, the application should mask it before displaying (e.g., on a subsequent UI screen).
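As an illustration, a masking helper satisfying this requirement might look like the following sketch (the function name and limits are ours; PCI DSS 3.3 gives first six and last four digits as the maximum that may be shown):

```python
def mask_pan(pan: str, show_first: int = 6, show_last: int = 4) -> str:
    """Mask a PAN per PCI DSS 3.3: show at most the first six and
    last four digits; everything in between is replaced."""
    digits = "".join(ch for ch in pan if ch.isdigit())
    if len(digits) <= show_first + show_last:
        raise ValueError("PAN too short to mask safely")
    hidden = len(digits) - show_first - show_last
    return digits[:show_first] + "*" * hidden + digits[-show_last:]

print(mask_pan("4111 1111 1111 1234"))  # 411111******1234
```

An application would apply such a transformation before the PAN ever reaches a display widget, receipt, or report.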
**Requirement 4 (Protect PAN when Storing):** PCI DSS Section 3.4 states:
*Render PAN unreadable anywhere it is stored – including on portable digital media, backup media, in logs, and data received from or stored by wireless networks. Technology solutions for this requirement may include strong one-way hash functions of the entire PAN, truncation, index tokens with securely stored pads, or strong cryptography.*
This requirement supplements Requirement 1 with restrictions specifically for the credit card number (PAN). If it is written at all, some sort of protection is required.
**Requirement 5 (Use secure communication):** PCI DSS Section 4.1 states:
*Use strong cryptography and security protocols to safeguard sensitive cardholder data during transmission over open, public networks (e.g., Internet, wireless technologies, cellular technologies, General Packet Radio Service [GPRS], satellite communications). Ensure wireless networks transmitting cardholder data or connected to the cardholder data environment use industry best practices to implement strong encryption for authentication and transmission.*
From the perspective of mobile applications, all network connections should use TLS/SSL. Furthermore, the application should not remove the server authentication checks, which prior work [17] has identified is a common vulnerability in mobile applications.
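The same pattern exists outside Android: in Python's standard `ssl` module, for example, the default client context already enforces what Requirement 5 asks for, and the vulnerability analyses look for is code that switches those checks off (an illustrative sketch, not code from any analyzed application):

```python
import ssl

# The secure default: certificate validation and hostname checking on.
ctx = ssl.create_default_context()
assert ctx.check_hostname is True
assert ctx.verify_mode == ssl.CERT_REQUIRED

# The common vulnerability is the opposite: explicitly disabling checks.
bad = ssl.create_default_context()
bad.check_hostname = False        # anti-pattern; never ship this
bad.verify_mode = ssl.CERT_NONE   # accepts any certificate
```

Static checks in the spirit of MalloDroid flag exactly this kind of override of the platform's default server authentication.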
**Requirement 6 (Secure transmission of PAN through messaging technologies):** PCI DSS Section 4.2 states:
*Never send unprotected PANs by end-user messaging technologies (for example, e-mail, instant messaging, SMS, chat, etc.).*
Again, specific additional restrictions are made for the credit card number (PAN). That is, mobile applications should not pass the PAN to APIs for sending SMS messages. Additionally, Android allows sharing data with other messaging applications using its Intent message-based inter-component communication (ICC). Such messages should be protected.
While many studies have investigated vulnerabilities in mobile applications, we are unaware of studies focused on credit card information. Such vulnerabilities represent PCI DSS violations and hence are of significant importance. However, programmatically investigating the relevant PCI DSS requirements is nontrivial, presenting the following key challenges.
- **Credit card information is often collected via text input.** There is no clearly-defined API that identifies when the user enters a credit card number. These inputs must be identified and linked to control and data flow graphs.
- **The relevant PCI DSS requirements are context-sensitive.** Simple data-flow analysis is insufficient. For example, some types of credit card information can be stored or transmitted if it is obfuscated.
- **The relevant PCI DSS requirements are imprecise.** The requirements often refer to broad approaches to information protection such as rendering the PAN “unreadable.” There are many ways in which developers can achieve these goals.
Cardpliance addresses these challenges using a collection of tailored static program analysis tests. Where possible, we leverage existing open source projects that embody knowledge gained from a decade of mobile application analysis. Specifically, we build upon UiRef [8] to infer the semantics of text input and Amandroid [19] (also called Argus-SAF) to perform static data flow analysis. Our analysis also leverages concepts from MalloDroid [17] to identify SSL vulnerabilities and StringDroid [39] for identifying the URL string used to make network connections. Combining these existing techniques to create specific PCI DSS checks requires careful construction and represents a unique contribution.
Figure 1 provides a high-level overview of Cardpliance’s approach to identifying PCI DSS violations in mobile applications. The first step is to identify which applications ask users to enter credit card information. While we build upon UiRef for user interface analysis, UiRef requires injecting code and executing the repackaged application, a process too heavyweight for application discovery. Therefore, we use a two-phase application filter: first, a lightweight keyword-based search of the strings used by the application; then, UiRef confirms that the application actually asks the user to enter credit card information (e.g., the terms could have been used in some other context).
The next phase is the Data Dependence Graph (DDG) extraction. A key feature of Amandroid is to produce graphs upon which different static analysis tasks can be performed. This approach encapsulates traditional static program analysis within the core Amandroid tool and allows users of Amandroid to focus on their goals as graph traversal algorithms. However, we found that the latest version of Amandroid did not include all of the program contexts that were needed for our PCI DSS tests. First, we use information from UiRef to annotate UI input widgets as being related to credit card information. Second, we enhance how Amandroid handles OnClickListeners to correctly track data flows from UI input.
The six PCI DSS tests capture the relevant requirements described in Section 2. Described in detail in Section 4, these tests consider the different uses of cardholder data (CHD) and sensitive account data (SAD) listed in Table 1. Each test defines sets of sources and sinks for Amandroid’s taint analysis; however, the tests require context beyond traditional taint analysis. First, Amandroid uses method signatures as sources and sinks, whereas Cardpliance only considers a subset of method calls that are parameterized with specific concrete values (e.g., UI widget references from UiRef). Second, three of the six tests are designed to not raise an alarm if all paths from a specific source to a specific sink invoke a method that makes the data flow acceptable (e.g., masking or obfuscating the credit card number). Therefore, Cardpliance includes additional traversal of the DDG.
Finally, due to the imprecision of PCI DSS, several of the tests are inherently heuristic. In such cases, we err on the side of being security conservative, preferring false positives over false negatives and invalidating the false positives through manual inspection. Therefore, Cardpliance serves as a tool to drastically reduce the effort of a manual auditor, providing key information necessary to make a certification determination. Section 5 describes our experiences manually reviewing flagged applications with the JEB decompiler. Note that we did not perform dynamic analysis of the flagged applications because many of them required social security numbers to register for accounts or for us to be in a physical location to test (e.g., road toll applications).
### 4 Cardpliance
Android application analysis is a well-studied problem. Open-source analysis tools such as FlowDroid [9], Amandroid [19], and DroidSafe [21] capture much of Android’s runtime complexity, including application lifecycles and callbacks from code executing in system processes. We chose to build on top
of Amandroid, also called Argus-SAF,\(^1\) because it a) is being actively maintained, b) has a design that is easy to extend, and c) outputs convenient graphs for use by novel analysis. This section is split into two parts: First, we explain key concepts in Amandroid and how we configured it for our analysis. Second, we describe our tests that capture the relevant PCI DSS requirements described in Section 2. This second part captures a key technical contribution of this paper.
### 4.1 DDG Extraction
The Cardpliance tests are graph queries on Amandroid’s Data Dependence Graph (DDG). Amandroid performs flow- and context-sensitive static program analysis on .apk files. It analyzes each Android component (e.g., Activity component) separately and then combines the per-component analysis to handle inter-component communication (ICC). As such, program analysis timeouts are defined at the component level (as we discuss in Section 5, we use a timeout of 60 minutes).
Amandroid is primarily focused on data flow analysis. It calculates points-to information for each instruction in the control flow graph, storing it in a Points-to Analysis Results (PTAResult) hash map. It also keeps track of ICC invocations in a summary table (ST). Amandroid then produces an Interprocedural Data Flow Graph (IDFG) for each component, which combines the Interprocedural Control Flow Graph (ICFG) with the PTAResult for that component. It then generates an Interprocedural Data Dependency Graph (IDDG), which contains the same nodes as the IDFG, but the edges are the dependencies between each object’s definition to its use. Finally, a DDG for the entire application is created by combining each component’s IDDG and the ST.
Amandroid uses the DDG to perform taint analysis. Given a set of taint sources and taint sinks, Amandroid marks the sources and sinks in the DDG and computes the set of all paths between them. The list of paths from sources to sinks is stored in a Taint Analysis Result (TAR) structure. Amandroid allows the user to define sources and sinks via text strings of method signatures in a configuration file.
Cardpliance analyzes how applications handle credit card information entered by the user into text fields. Applications access this text via the `TextView.getText()` method. However, Cardpliance needs to determine which `TextView` objects correspond to the UI widgets that collect different types of credit card information. To acquire the `TextView` object, Cardpliance uses `Activity.findViewById(R.id.widget_name)` as a taint source. The analysis will taint the returned `TextView` and the subsequent string from `TextView.getText()`. Furthermore, since the DDG contains points-to information, the PCI DSS tests can use Amandroid’s `ExplicitValueFinder.findExplicitLiteralForArg` method to determine the integer value passed to the taint source. It then uses the resource IDs of credit card information widgets identified by UiRef [8] to determine the types of information flowing to each sink.
However, applications frequently call `Activity.findViewById()` to access many different UI widgets. Therefore, simply defining it as a taint source would cause Amandroid’s taint analysis to needlessly compute taint paths for many irrelevant sources. To address this problem, Cardpliance implements a custom source and sink manager that refines the taint sources to just those `Activity.findViewById()` instructions that are passed an integer in a list precomputed by UiRef. This process uses the PTAResult hash map while marking taint sources. In doing so, we significantly reduce the time to analyze applications.
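The refinement performed by the custom source and sink manager can be sketched as follows. This is an illustrative Python sketch, not Cardpliance’s actual implementation; the call-site tuples and the `CC_WIDGET_IDS` set are hypothetical stand-ins for Amandroid instructions and UiRef’s precomputed resource IDs:

```python
# Hypothetical stand-in for the resource IDs precomputed by UiRef.
CC_WIDGET_IDS = {0x7F0A0012, 0x7F0A0013}

def refine_taint_sources(call_sites):
    """Keep only findViewById call sites whose constant integer argument
    is one of the credit-card widget IDs; all other call sites are ignored
    so taint paths are not computed for irrelevant sources."""
    return [site for site in call_sites
            if site[1] == "Activity.findViewById" and site[2] in CC_WIDGET_IDS]

sites = [
    ("i1", "Activity.findViewById", 0x7F0A0012),  # credit card number field
    ("i2", "Activity.findViewById", 0x7F0A0099),  # unrelated widget
    ("i3", "TextView.getText", None),             # not a source candidate
]
print(refine_taint_sources(sites))  # only i1 survives the refinement
```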
Additionally, since one of Cardpliance’s tests uses `View.setText()` as a taint sink, we perform a similar optimization in the custom source and sink manager. In this case, we backtrack in the DDG to the definition site of the `View` object and identify the corresponding call to `Activity.findViewById()`. We then similarly resolve the integer resource ID. If the ID is in a predefined list (defined via a heuristic for the test), the call to `View.setText()` is defined as a taint sink.
Finally, we had to patch Amandroid’s control flow analysis to properly track the use of `View` objects obtained in `OnClickListener` callbacks. We found that many applications declare the `OnClickListener` of a `View` as an anonymous inner class. In such cases, Amandroid did not capture the data flow initiated by the button click. We fixed this issue by adding a dummy edge from the point where the `OnClickListener` was registered to the entry point of the corresponding `OnClickListener.onClick()` method.
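Abstractly, the control flow patch amounts to adding an edge from each listener-registration site to the entry of the corresponding `onClick()` method. The following is a simplified Python illustration using an adjacency map, not Amandroid’s actual data structures:

```python
def patch_listener_edges(cfg, registrations):
    """Add a dummy edge from each setOnClickListener registration site to
    the entry point of the corresponding onClick() method, so data flows
    initiated by the button click are not lost by the analysis."""
    for reg_site, onclick_entry in registrations:
        cfg.setdefault(reg_site, []).append(onclick_entry)
    return cfg

# Toy ICFG: "reg" registers an anonymous OnClickListener whose onClick()
# body eventually reaches a sink.
cfg = {"reg": [], "onClick_entry": ["sink"]}
patched = patch_listener_edges(cfg, [("reg", "onClick_entry")])
print(patched["reg"])  # the dummy edge now connects registration to onClick
```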
### 4.2 PCI DSS Tests
At a high level, Cardpliance uses Amandroid’s taint analysis result (TAR) to identify potential PCI DSS violations. However, the TAR does not consider context at the sources and sinks, or all different paths between the sources and sinks. Cardpliance uses the DDG to identify specific instructions as sources and sinks based on constant values available from the PTAResult hash map. It then calculates all paths between those specific source and sink instructions, determining if specific conditions occur (e.g., calling an obfuscation method).
#### 4.2.1 Analysis Approach
The DDG is a directed acyclic graph \((V,E)\) where the set of vertices \(V\) are program instructions and the set of edges \(E\) represent def-use dependencies between vertices \((v_i, v_j)\). We say there exists a path between \(v_s\) and \(v_k\) (denoted \(v_s \leadsto v_k\)) if there is a sequence of edges \((v_s, v_{s+1}), (v_{s+1}, v_{s+2}), \ldots (v_{k-1}, v_k)\). We refer to a specific path \(p\) from \(v_s\) to \(v_k\) as \(v_s \xrightarrow{p} v_k\).
\(^1\)http://pag.arguslab.org/argus-saf
Each PCI DSS test is defined with respect to instructions invoking three sets of methods: source methods (S), sink methods (K), and required methods (R). S and K are traditional sources and sinks for taint analysis. Whereas Amandroid’s sources and sinks are method signatures, some of Cardpliance’s sources and sinks are context-sensitive. For example, an instruction that calls `Activity.findViewById(int)` is only a source if the argument is an integer from a list of resource IDs identified by UiRef as requesting credit card information.
In contrast to S and K, R places requirements on the data flow path. Informally, R defines a set of methods that should be called on the data flow path (e.g., a string manipulation method that could mask characters). If no methods from R exist on the path, then a potential violation is raised.
We now describe the general template used by each test to generate sets of potential violations. For simplicity, we say that instruction \( v \in V \) is in \( S, K, \) or \( R \) if the instruction \( v \) calls a method in one of those sets, potentially parameterized with the correct constant values. Then, for \( v_s, v_k, v_R \in V \), the test produces paths as potential violations as follows:
\[
\{ (v_s \leadsto v_k) \mid v_s \in S,\; v_k \in K,\; \exists\, p : v_s \xrightarrow{p} v_k \wedge (\nexists\, v_r \in p \mid v_r \in R) \}
\]
That is, even if \( v_s \leadsto v_k \), it is not a violation if all paths from \( v_s \) to \( v_k \) include an instruction \( v_r \) that is in \( R \). Note that not all tests use \( R \), and the logic for those tests skips the second term in the conjunction. However, this is logically equivalent to \( R = \emptyset \), which will cause the term to always be true.
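As a minimal illustration of this template (not Cardpliance’s implementation), the following Python sketch enumerates paths in a toy DDG and flags any source-to-sink path that bypasses every method in R; the node labels and edge structure are hypothetical:

```python
def all_paths(edges, start, end, path=None):
    """Enumerate all paths from start to end in a DAG given as an adjacency dict."""
    path = (path or []) + [start]
    if start == end:
        yield path
        return
    for nxt in edges.get(start, []):
        yield from all_paths(edges, nxt, end, path)

def potential_violations(edges, labels, S, K, R):
    """Flag every source-to-sink path that contains no instruction
    invoking a method in R (with R empty, every path is flagged)."""
    flagged = []
    sources = [v for v, m in labels.items() if m in S]
    sinks = [v for v, m in labels.items() if m in K]
    for vs in sources:
        for vk in sinks:
            for p in all_paths(edges, vs, vk):
                if not any(labels[v] in R for v in p):
                    flagged.append(p)
    return flagged

# Toy DDG: v1 -> v2 -> v4 and v1 -> v3 -> v4, where v3 masks the PAN.
edges = {"v1": ["v2", "v3"], "v2": ["v4"], "v3": ["v4"]}
labels = {"v1": "findViewById", "v2": "toString", "v3": "mask", "v4": "setText"}
print(potential_violations(edges, labels,
                           S={"findViewById"}, K={"setText"}, R={"mask"}))
# Flags only the path through v2, which bypasses the masking call.
```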
### 4.2.2 Test Implementation
The remainder of this section describes our six PCI DSS tests with respect to \( S, K, \) and \( R \). In doing so, we reflect on the relevant requirements described in Section 2. We also describe implementation-specific considerations for each test. An overview of the tests is provided in Table 2.
**Test T1 (Storing CHD):** Requirement 1 in Section 2 states that storage of cardholder data (CHD) should be limited, and if it is stored, there should be a mechanism to delete it after a period of time. Determining all of the ways in which persistent data can be deleted is not practical using static program analysis. Therefore, Test T1 takes a security-conservative approach and identifies whenever CHD is written to persistent storage. As such, Test T1 is more of a warning than a violation of PCI DSS. However, it is useful as a coarse metric and can bring potentially dangerous situations to the attention of a security analyst.
Test T1 captures a key program analysis primitive that is needed by the other tests: data flow analysis from specific UI inputs. Amandroid provides a Taint Analysis Result (TAR) structure that contains a superset of all of the paths identified in all of the tests. Test T1 filters the TAR based on the sources and sinks listed in Table 2. Note that Test T1 only considers the sources that call `Activity.findViewById(int)` with resource IDs corresponding to CHD. We further reduce the text input source to just the credit card number (PAN), as there is the potential for ambiguity when identifying the other fields (e.g., cardholder name vs. another name field). The custom source and sink manager described in Section 4.1 limits the analysis to credit-card-related data as a whole, which includes both CHD and SAD. Therefore, we again use Amandroid’s `ExplicitValueFinder`, but within a different phase of the analysis. The data persistence method (DPM) sink methods listed in the table do not require special consideration. Once these concrete sources and sinks are identified, we traverse the DDG to identify all paths between them.
**Test T2 (Storing SAD):** Requirement 2 in Section 2 states that sensitive account data (SAD) should never be written to persistent storage, including logs. From the mobile application perspective, the only SAD that users will enter into text fields is the three or four digit CVC code written on the physical card. Therefore, Test T2 only needs to consider `Activity.findViewById(int)` sources that are passed resource IDs corresponding to CVC-related fields. The remainder of the analysis is identical to Test T1. Note that unlike Test T1, the existence of a data flow path directly represents a PCI DSS violation.
**Test T3 (Masking Credit Card Number):** Requirement 3 in Section 2 states that the only time the application should display the full credit card number (PAN) is when the user is entering it in the text field. All other times the credit card number is displayed, it should be masked, showing at most the first six and last four digits of the number.
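For concreteness, one way a developer might satisfy this masking rule is sketched below. This is an illustrative example, not code taken from any analyzed application:

```python
def mask_pan(pan: str) -> str:
    """Mask a PAN for display, keeping at most the first six and last four
    digits (the PCI DSS display limit); all other digits become '*'.
    Non-digit characters (e.g., spaces) are preserved as-is."""
    digits = [c for c in pan if c.isdigit()]
    keep_head, keep_tail = 6, 4
    masked = []
    i = 0
    for c in pan:
        if c.isdigit():
            masked.append(c if i < keep_head or i >= len(digits) - keep_tail else "*")
            i += 1
        else:
            masked.append(c)
    return "".join(masked)

print(mask_pan("4111 1111 1111 1234"))  # "4111 11** **** 1234"
```

A data flow whose path passes through a method like this (a member of the PMM set) would not be flagged by Test T3.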
Test T3 requires additional sophistication in the static program analysis algorithm. First, it includes \( R \), the set of required methods. Recall that a violation does not occur if all paths from the sources to the sinks include an instruction that invokes a method in \( R \). In this case, we define a set of PAN masking methods (PMM), listed in Table 2, that represent different ways in which the application developer may have masked the credit card number. While the developer may choose to use other string manipulation methods, this set is conservative and will raise an alarm for manual review by a security analyst. Of course, this set can be easily expanded as additional string manipulation methods are discovered.
Second, Test T3 considers not only textual user input as taint sources, but also input from the network. For example, an application may retrieve the credit card number from the server and display it for the user. Such cases should also be masked. However, in this case, it is nontrivial to detect which input data is the credit card number. While the semantics of JSON key-value fields could potentially be used [26, 32], we elected to use a simpler heuristic that filters tainted paths at the sink. Specifically, we extract a list of all resource IDs of UI widgets that exist on a UI screen that also contains the text “Credit Card.” Our intuition is that mobile application UI screens are generally purpose-specific and the other displayed information is likely related. This classification allows
the static program analysis to only consider `View.setText()` methods as taint sinks if they correspond to objects that were retrieved using `Activity.findViewById()` and a resource ID from that set. As mentioned in Section 4.1, we leverage the `ExplicitValueFinder` within the custom source and sink manager to perform this refinement. We therefore leverage the `View.setText()` sinks in Amandroid’s TAR structure, knowing that they have been refined as such.
Once Test T3 has filtered the TAR with respect to the sources and sinks described above, it computes all paths between them using the DDG. We then remove paths that contain a method from R. The resulting set of paths are potential violations of the PCI DSS and are made available for manual review.
**Test T4 (Storing Non-Obfuscated Credit Card Number):** Requirement 4 in Section 2 states that the credit card number (PAN) should always be protected if it is stored by the mobile application. The PCI DSS standard has some flexibility in how the number is protected, but it offers suggestions including one-way hash functions and cryptography. Requirement 4 refines Requirement 1 specifically for the credit card number, and since our Test T1 only considers the credit card number, and not the other CHD values, it might seem that Test T1 and Test T4 are redundant. However, we include both because Test T1 captures all cases when the credit card number is written to persistent storage, whereas Test T4 only raises an alarm when there is no obfuscation method on the data flow path. Put another way, Test T1 is designed to be a warning for closer inspection, whereas Test T4 is designed to detect violations.
Given the similarity to Test T1, Test T4 follows the same implementation pattern. However, Test T4 includes a set R of required obfuscation methods (OM), as listed in Table 2. These methods include calls to common encryption and message digest functionality in Java, as listed on the Android developer’s website [2]. Similar to Test T3’s PAN masking methods, we do not seek to enumerate all possible cryptography libraries. Nonstandard libraries should be reviewed and can potentially be added to the list in the future. For the Cipher.doFinal() method, we validate that the Cipher object is initialized with an ENCRYPT_MODE. In the future, additional cryptography checks [14,25] could be incorporated. Note that false negatives resulting from this limitation of Test T4 would still raise a warning for Test T1, which reports any write to storage, obfuscated or not. Finally, Test T4 uses the same strategy as Test T3 for ensuring all paths from the filtered sources and sinks contain a method from R.
**Test T5 (Insecure Transmission):** Requirement 5 in Section 2 states that mobile applications should always use TLS/SSL when transmitting cardholder data. There are two ways in which an application can fail to properly use TLS/SSL: (1) send data via HTTP URLs, (2) invalidate certificate checks when sending data via HTTPS URLs.
As shown in Table 2, Test T5 uses `OutputStreamWriter.write()` as taint sinks to filter the TAR. However, these sinks may also be used for file writes. Unfortunately, the `URLConnection` object used to create the output stream will not be on the tainted path for the credit card number (so R cannot be used). Therefore, we separately walk backward on the DDG from the taint sink to find the `URLConnection` object used to create the output stream object. We then use Amandroid’s ExplicitValueFinder to determine the argument passed to the corresponding URL initialization method (`URLConnection.init(String)`). We then determine if the string is an HTTP or HTTPS URL. If an HTTP URL is used, an alarm is raised.
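The scheme check at the end of this backward walk is simple once the URL literal has been recovered. A minimal sketch (assuming the literal string is already available, which in Cardpliance requires the DDG backtracking described above) is:

```python
from urllib.parse import urlparse

def insecure_url(url_literal: str) -> bool:
    """Flag plaintext HTTP; HTTPS URLs still need the separate
    SSL-configuration check for invalidated certificate validation."""
    return urlparse(url_literal).scheme == "http"

print(insecure_url("http://pay.example.com/charge"))   # True: raises an alarm
print(insecure_url("https://pay.example.com/charge"))  # False: check SSL config next
```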
If an application has an HTTPS URL as a taint sink, we also check if the application contains a vulnerable TLS/SSL configuration. To do so, we leverage Amandroid’s existing API Misuse module, which has a configuration option for
---
**Table 2: PCI DSS tests defined by source (S), sink (K), and required (R) methods on data flow paths in the DDG.**
<table>
<thead>
<tr>
<th>Test</th>
<th>Identifies</th>
<th>S</th>
<th>K</th>
<th>R</th>
</tr>
</thead>
<tbody>
<tr>
<td>T1</td>
<td>Storing CHD</td>
<td>Activity.findViewById(ID_CC)</td>
<td>DPM</td>
<td>-</td>
</tr>
<tr>
<td>T2</td>
<td>Storing SAD</td>
<td>Activity.findViewById(ID_CVC)</td>
<td>DPM</td>
<td>-</td>
</tr>
<tr>
<td>T3</td>
<td>Not Masking Credit Card Number</td>
<td>Activity.findViewById(ID_CC), URLConnection.getInputStream()</td>
<td>View.setText()</td>
<td>PMM</td>
</tr>
<tr>
<td>T4</td>
<td>Storing Non-Obfuscated Credit Card Number</td>
<td>Activity.findViewById(ID_CC)</td>
<td>DPM</td>
<td>OM</td>
</tr>
<tr>
<td>T5</td>
<td>Insecure Transmission</td>
<td>Activity.findViewById(ID_CC)</td>
<td>OutputStreamWriter.write(), OutputStream.write()</td>
<td>-</td>
</tr>
<tr>
<td>T6</td>
<td>Sharing Non-Obfuscated Credit Card Number</td>
<td>Activity.findViewById(ID_CC)</td>
<td>Intent.putExtra(), SmsManager.sendTextMessage()</td>
<td>OM</td>
</tr>
</tbody>
</table>
COMMUNICATION_LEAKAGE. Specifically, this check looks for insecure implementations of SSLSocketFactory and a TrustManager that uses the ALLOW_ALL_HOSTNAME_VERIFIER flag. Note that this analysis is not context-sensitive to a specific taint sink, as these options are often set globally for an application. Therefore, there is a possibility for false positives if an application uses different SSL configurations for different network connections.
**Test T6 (Sharing Non-Obfuscated Credit Card Number):** Requirement 6 in Section 2 states that credit card numbers should be protected if they are shared outside of the application. Therefore, we consider both SMS APIs and Android’s inter-component communication (ICC) mechanism that allows execution to span applications. Similar to Test T4, this test determines if all paths from sources and sinks include a call to an obfuscation method, as shown in Table 2.
Identifying taint sinks for SMS is straightforward due to Android’s runtime API SmsManager.sendTextMessage(). Identifying ICC taint sinks is more complex. First, ICC is commonly used within an application. To simplify the analysis, we assume that Intent messages with explicit destinations (i.e., specify the exact target component name) are used for ICC within an application, and implicit destinations (i.e., use “action” strings) are used for ICC between applications. Second, the Intent objects used for ICC are populated in steps. We use Intent.putExtra() as a taint sink filter for the TAR. We then backtrack the DDG to find the Intent object creation and use Amandroid’s ExplicitValueFinder to identify if it is an implicit or explicit Intent. If it is an implicit Intent and the action value is ACTION_SEND, we use the Intent.putExtra() call as a taint sink, as this is the action string used to share information between applications. Finally, we follow a similar process as Test T4 to ensure that all paths between the sources and sinks include a required obfuscation method from R. Paths failing this requirement will raise an alarm.
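The sink-selection heuristic for ICC can be summarized as a small predicate. The sketch below assumes the explicit/implicit classification and the action string have already been resolved via the DDG backtracking described above:

```python
ACTION_SEND = "android.intent.action.SEND"

def is_icc_sharing_sink(intent_is_explicit: bool, action: str) -> bool:
    """Treat Intent.putExtra() as a taint sink only when the Intent is
    implicit and uses ACTION_SEND, i.e., when the data may leave the
    application; explicit Intents are assumed to stay within the app."""
    return (not intent_is_explicit) and action == ACTION_SEND

print(is_icc_sharing_sink(False, ACTION_SEND))  # True: cross-app sharing
print(is_icc_sharing_sink(True, ACTION_SEND))   # False: explicit target
```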
### 5 PCI DSS Compliance Study
Our primary motivation for creating Cardpliance was to analyze whether mobile applications are mishandling payment information. The goal of this study is to gauge the impact of PCI DSS non-compliance on real-world users. In this section, we use Cardpliance to analyze popular applications from Google Play for potential PCI DSS violations and present case studies based on our findings.
As Cardpliance uses static analysis to vet applications’ compliance with PCI DSS requirements, it inherits the limitations of static analysis. In particular, static analysis may over-approximate application behaviors, which can result in false alarms. Therefore, we manually validate data flows that Cardpliance flags as potential PCI DSS violations to determine whether the application is actually violating PCI DSS requirements. Note that the goal of validation is to determine whether the application is violating PCI DSS requirements, not to comprehensively determine whether every data flow identified by static analysis is a true positive or false positive. Therefore, a true positive denotes that the application contains a PCI DSS violation, while a false positive denotes that none of the data flows flagged by static analysis were valid due to errors in the underlying tooling.
### 5.1 Dataset Characteristics
To select our dataset, we downloaded the top 500 free applications (“top_selling_free” collection) across Google Play’s 35 application categories in May 2019, resulting in an initial dataset of 17,500 applications. To determine which applications request payment information, we disassembled the dataset and performed a keyword search on the resource files for terms that describe payment card numbers (e.g., credit card number, debit card number, card number). The list of terms was obtained from the synonym list in UiRef [8] for “credit card number.”^2 This keyword-based triaging flagged 1,868 applications as potentially requesting credit card information, reducing the dataset by 89.3% (15,632/17,500). Note that this triaging may under-approximate the total number of applications requesting credit card numbers, depending on the comprehensiveness of the keyword list. However, since this keyword list was used by prior work [8] to identify that 4,433/50,162 (8.83%) applications in Google Play ask users for credit card information, we believe it is suitable for our study. We leave it as future work to construct a comprehensive multi-language vocabulary of terms that refer to credit card numbers.
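The first-phase filter can be sketched as a simple substring search. The keyword set below is a small hypothetical stand-in for UiRef’s full synonym list:

```python
# Hypothetical stand-in for UiRef's synonym list for "credit card number".
KEYWORDS = {"credit card number", "debit card number", "card number"}

def mentions_payment_terms(resource_strings):
    """Phase-one triage: flag an application if any of its resource strings
    contains a payment-card keyword (case-insensitive substring match).
    A match only justifies deeper UiRef analysis, not a conclusion."""
    return any(kw in s.lower() for s in resource_strings for kw in KEYWORDS)

print(mentions_payment_terms(["Enter your Card Number", "Submit"]))  # True
print(mentions_payment_terms(["Username", "Password"]))              # False
```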
As discussed previously, simply containing a string that matches a credit-card-related keyword does not imply the application accepts credit card numbers from the user. Therefore, we use UiRef to determine when an application takes credit card numbers as input. We ran UiRef on the refined dataset and found that 807 applications failed during reassembly due to errors in ApkTool.\footnote{https://ibotpeaches.github.io/Apktool/} UiRef failed to extract layouts from an additional 110 applications. Of the remaining 951 applications, UiRef identified 442 applications containing input widgets that request credit card numbers.
We ran Cardpliance on the 442 applications that request payment information. We performed the analysis on a virtual machine running Ubuntu 18.04 on the VMware ESXi 6.4 hypervisor with an Intel(R) Xeon(R) Gold 6130 2.10GHz machine with 320 GB RAM and 28 physical cores. We configured Amandroid with a timeout of 60 minutes per component. In total, Cardpliance successfully analyzed 80.99% (358/442) of the triaged application dataset. Of the 19.01% (84/442) applications that failed analysis, 3.84% (17/442) contained components that exceeded the timeout and 15.15% (67/442) could not run due to errors in the underlying static analysis framework, Amandroid.
**Finding 1:** At least 2.5% of popular free Android applications on Google Play directly request payment information. As discussed above, we used a lightweight heuristic to identify which applications were mentioning credit card numbers and then used UiRef to resolve semantics. We found that 442 applications contain input widgets that directly request payment information from users (i.e., credit card numbers). This reduction in the scope of analysis makes deploying the deeper and more time-consuming static analysis checks provided by Cardpliance feasible at scale. Note that this is a conservative lower-bound estimate, as we could not analyze 917 applications due to errors in ApkTool and UiRef.
**Finding 2:** Cardpliance can analyze an application with a mean and median runtime of 334 minutes and 179 minutes, respectively. Figure 2 plots the runtime versus the number of components within an application. Note that the x-axis consists of the 358 applications sorted in ascending order by the number of components within the application. The component counts ranged from 0 to 315 components, with an average of 54 components per application. As shown in Figure 2, an increased number of components within an application generally resulted in a longer runtime. Further, runtime growth saturates after roughly the 170th application, which has 40 components. The mean and median runtimes were 334 and 179 minutes per application, respectively.
Cardpliance’s runtime significantly increased over the stock version of Amandroid [19] due to the inclusion of frequently used user input sources and sinks, such as `Activity.findViewById(int)` and `View.setText()`. For example, an application may only have the source `TelephonyManager.getDeviceID()` once within the application, but it will likely call the source `Activity.findViewById(int)` many times throughout the application, which significantly increases the number of sources that require tracking. Therefore, in order to scale Cardpliance to an entire market, a lightweight keyword-based filter is required (as shown in Figure 1). Note that if the filter is not comprehensive, non-compliant applications may not be discovered. We discuss this limitation further in Section 5.7.
Finally, as discussed above, Cardpliance successfully analyzed 358 applications. Those 358 applications spanned 32 application categories, with the majority coming from FOOD_AND_DRINK (51), SHOPPING (43), FINANCE (39), and MAPS_AND_NAVIGATION (37). These applications had an average download count of 1.25 million and an average rating of 3.8 out of 5 stars. The most popular application in the group, Wish - Shopping Made Fun (`com.contextlogic.wish`), had over 100 million downloads. The dataset included other widely used applications, such as Lyft (`me.lyft.android`), CVS Caremark (`com.caremark.caremark`), and the WWE application (`com.wwe.universe`).
### 5.2 Validation Methodology
We opt for manual code review instead of manually running each application due to the complexity of reaching the screens that request payments (e.g., creating accounts that require disclosure of sensitive data, requiring referral codes, or relying on an existing balance/debt). The manual code review for validation was performed by one student author of this paper, who has more than 6 years of academic and industrial experience programming Java and developing Android applications. For each candidate application flagged by Cardpliance, we begin by decompiling the application with the JEB decompiler [35] to obtain the source code. We then group the data flows that were marked as potential PCI DSS violations by the PCI DSS requirement from Section 4 that they potentially violate.
The goal of validation is to verify that the data flow actually occurs within the code and was not a false alarm due to the imprecision of the underlying tooling. Note that for all of the validation checks, we stop verification if we discover that the result is a false alarm and begin validating the next data flow within the PCI DSS requirement group. If all of the data flows within the PCI DSS requirement group are erroneous, we mark the application as a false positive for that PCI DSS requirement group. However, if we successfully validate the data flow, we mark the application as containing a PCI DSS violation and start analysis on the next PCI DSS requirement group for that application.
We begin by validating whether the semantics linked to the input widget of the data flow was correctly resolved by UiRef. We start at the source of the data flow (e.g., `Activity.findViewById(int)` method) and resolve the integer parameter of the method invocation to the resource identifier in the R.java file.
of the source code. We identify the input widget referenced by the resource identifier within the source code and validate that UiRef resolved the correct semantics (i.e., credit card number, CVC). If UiRef was incorrect, we mark the data flow as erroneous and begin validating the next data flow for that requirement group. If UiRef resolved the correct semantics, we continue with the following validation process.
Next, we trace through the source code from the source of the data flow to the sink to determine that the data flow exists within the source code. For example, if the data flow denotes that non-obfuscated credit card numbers are being stored, we verify that the data retrieved from the input widget accepting credit card numbers is actually written to disk without being encrypted or through some other obfuscation library. If the data flow does not occur within the source code due to imprecisions of static analysis, we mark it as an error and continue analysis as discussed above. For example, we found that the Context object of the Activity.findViewById(int, Context) method was frequently tainted and led to imprecision.
Finally, for validating potential SSL vulnerabilities that lead to insecure transmission, we searched for SSLSocketFactory and TrustManager classes within the source code and manually checked whether the implementation was performing improper certificate validation. We then searched for the use of those classes throughout the source code and determined whether payment information was sent over connections using these vulnerable classes.
### 5.3 Compliance: The Good
In this section, we report the positive findings from our analysis of the 358 applications analyzed by Cardpliance. We believe that these findings provide significant value and insight to the community.
**Finding 3:** Around 98.32% of the 358 applications pass Cardpliance’s PCI DSS compliance tests. Out of the 358 applications, Cardpliance identified that 318 applications did not violate any of the PCI DSS compliance checks. After manual validation of Cardpliance’s findings, we found that 352 applications in total were not violating any PCI DSS check that we modeled. This result in itself is surprising given the vast amount of prior research that highlights the poor state of Android application security [9, 15, 16, 22]. The fact that 98.32% of the applications in our dataset that handle payment information maintain these data security standards suggests that the risk of financial loss due to insecure behaviors in mobile applications might not be as widespread as feared. Further, as the majority of applications seem to handle payment information correctly, it demonstrates that securely processing payment information and meeting PCI DSS requirements within a mobile application is largely attainable.
**Finding 4:** Applications are correctly using HTTPS instead of HTTP to transmit payment information. Cardpliance did not identify any applications that transmitted payment information insecurely in plaintext over HTTP in Test T5. The adoption of HTTPS over insecure HTTP is a move in the right direction, as a prior study [17] showed that 93.4% of URLs in Android applications were HTTP and another study showed poor SSL adoption in financial applications in developing countries [31]. The fact that we did not find any applications sending payment information over HTTP means that the effort to push HTTPS adoption has been working for transmitting sensitive information, such as payment information. Note that as Cardpliance is a static analysis-based approach, we cannot determine whether payment information is sent insecurely if the destination URLs are not present in the code or resource files. This limitation is shared by practically all prior work on this same problem [17, 31].
**Finding 5:** Applications are correctly performing hostname and certificate verification when sending payment information over SSL connections. Cardpliance identified 20 applications that were handling payment information and also contained vulnerable SSL implementations within their codebase. Out of these 20 applications, we did not find evidence during manual verification that any payment information was sent over vulnerable SSL connections. The majority of the code for the vulnerable SSL implementations was dead code or contained build flags that disabled that functionality. Overall, this finding demonstrates the positive impact of prior research on SSL misconfigurations [17] and of Google’s efforts on Android application security.4
Note that we did find that the Harris Teeter application (com.harristeeter.htmobile) sends profiling and usage data to Dynatrace over a vulnerable SSL connection, which results from a misconfiguration when interfacing with the Dynatrace library. This issue of sending non-payment information indicates that vulnerable SSL problems still exist. As recommended in Section 6, developers should never modify SSLSocketFactory or TrustManager within the application. Further, third-party libraries that applications are including should also be vetted, as they can override the TrustManager used by the default SSLSocketFactory, which could result in all SSL connections within the application becoming vulnerable.
**Finding 6:** Applications are not insecurely sharing payment information via SMS or with other applications via ICC channels. Cardpliance did not identify any applications transmitting payment information to other applications using SMS APIs or implicit intents without obfuscating the data in Test T6. Prior research [22] highlighted that a wide range of private data was being leaked through ICC, such as location data and device identifiers. In this work, we demonstrate that credit card numbers are not being insecurely exposed through the use of implicit intents. One potential mitigating factor may be that Android has banned binding to services with implicit intents since Android 5.0 [1].
4https://support.google.com/faqs/answer/6346016?hl=en
### 5.4 Non-Compliance: The Bad and the Ugly
After validation of the 40 applications that Cardpliance flagged as having potential PCI DSS violations, we found that 6 applications were non-compliant with PCI DSS requirements. Table 3 lists all of the applications that contain PCI DSS violations. While the fact that only 1.67% of the 358 credit card number collecting applications are non-compliant with PCI DSS requirements does not seem surprising in itself, the fact that any applications are non-compliant is troublesome. The impact of non-compliance is substantial for end-users, app developers, payment processors, and issuing banks. For end-users, non-compliance may result in significant financial loss due to fraud if payment information is insecurely exposed. For companies, non-compliance can damage public perception and incur fines ranging from $5,000 to $100,000 a month, depending on the size of the business and the degree of non-compliance [5].
While a precision of 6 true violations out of 40 flagged applications is not ideal, Cardpliance narrowed the scope of PCI DSS compliance analysis from manually validating 17,500 applications to only 40. Further, the main source of imprecision was the data flow analysis in Amandroid. For example, we found that the context object of the `Activity.findViewById(int,Context)` method was frequently tainted and became a large source of imprecision. The context-insensitive analysis of SSL vulnerabilities also contributed to the low precision. Future work can improve the precision of data flow tracking in static analysis tooling to reduce false alarms. The remainder of this section highlights our findings on the PCI DSS violations that Cardpliance identified within applications, along with case studies from our analysis.
**Finding 7**: Applications totaling over 1.5 million downloads are not complying with PCI DSS regulations. After verification, we found that 6 applications were non-compliant with the PCI DSS requirements. These violations were distributed across applications from popular merchant applications, toll-paying apps, and communication networks. The impact of these violations even reached vulnerable populations of users, such as the application for ConnectNetwork, which is an application that allows users to call and send messages to family and friends incarcerated within a prison. In total, the download counts of these 6 applications reached around 1.5 million downloads. Therefore, up to 1.5 million users were potentially impacted by the PCI DSS violations that Cardpliance identified and may be at risk for potential fraud. Findings 8-10 discuss each of the PCI DSS violations in depth.
**Finding 8**: Applications are storing credit card numbers without hashing or encrypting the data. Figure 3 shows that Cardpliance identified that 20 applications were persisting credit card numbers in files, shared preferences, and device logs (T1) with 19 of those applications not hashing or encrypting the data (T4). After manual validation, we found 5 out of those 20 (25%) applications were actually persisting credit card numbers and none of them were providing adequate protection of the data as defined by PCI DSS requirements by hashing or encrypting it. While we did not verify whether the location that the data is being saved was accessible to external applications, the fact that data is being saved in plaintext is a security risk. For example, consider the case where a user’s device is compromised by a malicious application that obtains root access to the device. Even if the application stores the data within its private directory that is traditionally protected by UNIX file system privileges, the malicious application can simply read it due to its escalated privileges. Therefore, all credit card numbers should be either hashed or encrypted before storing. If encrypting, the application should also use the Android Keystore to protect access to the cryptographic key.
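As a concrete illustration of the last point, the sketch below shows salted one-way hashing of a card number before storage. This is our own plain-Java example (class and method names are ours, not from any analyzed application); PCI DSS additionally constrains which hash constructions and key-management practices are acceptable.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

// Illustration: store only a salted one-way hash of a credit card
// number (PAN) instead of the plaintext value.
public class PanHasher {

    // Derive a fresh random salt from the platform's default entropy source.
    public static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt); // no hard-coded seed
        return salt;
    }

    // Hash salt || PAN with SHA-256; only this digest (plus the salt) is stored.
    public static String hashPan(String pan, byte[] salt) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        md.update(salt);
        md.update(pan.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(md.digest());
    }

    public static void main(String[] args) throws Exception {
        byte[] salt = newSalt();
        // Persist the digest and salt; the raw PAN never touches disk or logs.
        System.out.println(hashPan("4111111111111111", salt));
    }
}
```

If the application instead needs the original number back (e.g., to charge the card later), hashing does not suffice and encryption with a properly protected key is required, as discussed in Section 6.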
Although PCI DSS requirements allow storing credit card numbers, PCI DSS guideline 3.4.d states that application logs should not contain credit card numbers in plaintext. We found 4 applications writing credit card numbers to logs in plaintext. Examples of applications persisting and logging credit card numbers in plaintext are discussed in Section 5.5.
**Finding 9**: Applications are persisting card verification codes (CVCs). As shown in Figure 3, we validated that 3/8 (37.5%) of the applications that Cardpliance identified were persisting card verification codes (CVCs). As discussed in Section 2, PCI DSS mandates that CVCs should never be stored, even after authorization. One application called The Toll Roads (`com.seta.tollroadroid.app`), which has 100K+ downloads on Google Play, is used to estimate and pay tolls when traveling. This application was flagged by Cardpliance for outputting the payment request along with the CVC to the device logs. Similarly, another application for a franchise restaurant called
Table 3: Applications containing PCI DSS violations (X indicates a violated test).
<table>
<thead>
<tr>
<th>App Name</th>
<th>Package Name</th>
<th>Downloads</th>
<th>T1</th>
<th>T2</th>
<th>T3</th>
<th>T4</th>
</tr>
</thead>
<tbody>
<tr>
<td>Credit Card Reader</td>
<td>com.ics.creditcardreader</td>
<td>500K+</td>
<td>X</td>
<td></td>
<td></td>
<td>X</td>
</tr>
<tr>
<td>FastToll Illinois</td>
<td>com.pragmatic.fasttoll</td>
<td>10K+</td>
<td>X</td>
<td>X</td>
<td></td>
<td>X</td>
</tr>
<tr>
<td>Bens Soft Pretzels</td>
<td>com.rt7mobilereward.app.benspretzel</td>
<td>10K+</td>
<td>X</td>
<td>X</td>
<td>X</td>
<td>X</td>
</tr>
<tr>
<td>The Toll Roads</td>
<td>com.seta.tollroadroid.app</td>
<td>100K+</td>
<td>X</td>
<td>X</td>
<td></td>
<td>X</td>
</tr>
<tr>
<td>ConnectNetwork by GTL</td>
<td>net.gtl.mobile_app</td>
<td>1M+</td>
<td></td>
<td></td>
<td>X</td>
<td></td>
</tr>
<tr>
<td>Peach Pass GO!</td>
<td>com.srta.PeachPass</td>
<td>50K+</td>
<td>X</td>
<td></td>
<td></td>
<td>X</td>
</tr>
</tbody>
</table>
Ben’s Soft Pretzels (com.rt7mobilereward.app.benspretzel), with 10K+ downloads, was also writing the CVC to the device logs. Another toll application called FastToll Illinois (com.pragmistic.fasttoll) is used to pay tolls acquired within Illinois and has 10K+ downloads. Cardpliance identified that this application was persisting the CVC in the shared preferences of the application.
**Finding 10:** Applications are not masking credit card numbers when displaying them in the user interface. Figure 3 shows that Cardpliance identified 8 applications displaying credit card numbers without partial masking. After validation, we verified that 2 (25%) of these applications were not partially masking credit card numbers and were violating PCI DSS requirements. An application called ConnectNetwork by GTL (net.gtl.mobile_app) has 1M+ downloads and allows friends and family members to send messages to and call people incarcerated within a prison. This application takes the user’s credit card number as input in one UI widget and then displays it in another UI widget for validation without partially masking it. Section 5.5 discusses the other application in detail. Beyond directly violating PCI DSS compliance, all of these applications put users at risk of financial loss through potential shoulder surfing, including vulnerable populations such as users of the ConnectNetwork by GTL application.
### 5.5 Case Studies
In this section, we discuss two interesting case studies that demonstrate how applications are potentially mishandling credit card information and thus violating PCI DSS.
**Case Study 1:** A credit card reader application is mishandling hundreds of thousands of customers’ credit card numbers: Credit Card Reader (com.ics.creditcardreader) is a popular Android application on the Google Play Store with 500K+ downloads. This application functions similarly to a point-of-sale machine and allows the user to accept physical payments from customers. Cardpliance identified that this application was persisting credit card numbers without hashing or encrypting the information. A snippet of the source code for this application is shown in Listing 1. As shown in line 23, the application obtains the user’s credit card number from the EditText widget in the user interface and logs it directly to LogCat.
Note that this scenario is substantially worse than other applications logging payment information, as it exposes the credit card numbers of unsuspecting customers. As the application has 500K+ downloads and merchants may serve a wide range of customers, the number of customers impacted is ultimately unbounded but likely in the hundreds of thousands. As discussed in Finding 8, this practice violates PCI DSS guideline 3.4.4. Further, logging the credit card number also introduces additional risks of fraud. For example, if an adversary obtains physical access to the device, they can download all of the customers’ credit card numbers in plaintext. In addition, if the user’s device is compromised, a malicious application with escalated privileges could also potentially read all of the customers’ credit card numbers in plaintext. We recommend developers completely avoid writing credit card numbers to logging mechanisms.
**Case Study 2:** An application for placing online orders at a restaurant franchise is persisting credit card numbers in plaintext along with CVVs: A franchise restaurant called Ben’s Soft Pretzels has an application on Google Play (com.rt7mobilereward.app.benspretzel) with 10K+ downloads. Based on the developer identifier and website on Google Play, the development of the application appears to have been outsourced to a company called RT7 Incorporated.
The application allows users to place online orders from the restaurant and it accepts credit card payments via the application. Cardpliance identified that this app was persisting credit card numbers without hashing or encrypting, persisting CVVs, and not masking credit card numbers when displaying.
Our validation of the application uncovered several concerning problems. In particular, we found that they were attempting to encrypt the credit card number before storing it to SharedPreferences. However, the key in the key-value pair used to store the encrypted credit card number was the concatenation of a constant string and the user’s credit card number and username. Therefore, the credit card number is still being persisted to disk in plaintext. Further, as shown in Listing 2, they use the bytes from the username and credit card number to seed the random number generator for generating the key. This encryption key is also written to the logs and SharedPreferences as a value under a key that contains both the card number and username. In addition, we found that when the user clicks on the pay button, the credit card number and CVV are both logged. If any of the fields that the user entered are empty when the button is clicked, the remaining payment information is also logged (e.g., expiration date, name, address, and zip code). Moreover, in the CreditCardSaved2Page Activity, the application saves the credit card number in plaintext and CVC code to SharedPreferences as a value under the keys “CardNumTemp” and “CardCvcTemp,” respectively. If the user traverses back to the page, both the credit card number and CVV are fetched from SharedPreferences and repopulated into the text fields. Note that re-displaying credit card numbers without masking is a violation of PCI DSS. In Section 6, we provide recommendations on how developers can securely handle credit card numbers and CVVs and generate and protect encryption keys.
Listing 2: Code snippet of Ben’s Soft Pretzels app insecurely generating and handling encryption keys.
```java
private SecretKeySpec setkey() {
    SecretKeySpec v0_1;
    try {
        SecureRandom v0 = SecureRandom.getInstance("SHA1PRNG");
        v0.setSeed(CreditCardEnterPage.userCardNumber().getBytes());
        KeyGenerator v1 = KeyGenerator.getInstance("AES");
        v1.init(128, v0);
        v0_1 = new SecretKeySpec(v1.generateKey().getEncoded(), "AES");
    } catch (Exception unused_ex) {
        Log.e("AES Error", "AES secret key spec error");
        v0_1 = null;
    }
    if (v0_1 != null) {
        String v1_1 = Base64.encodeToString(v0_1.getEncoded(), 0);
        SharedPreferences.Editor v2 = PreferenceManager.getDefaultSharedPreferences(this).edit();
        v2.putString("UserCardNumber", concat(CreditCardEnterPage.userCardNumber(), v1_1));
        Log.d("ToChangedStores", v1_1);
        v2.apply();
    }
    return v0_1;
}
```
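The core weakness in Listing 2 is that SHA1PRNG, when seeded before first use, is fully deterministic: anyone who learns the card number can regenerate the identical AES key. The following sketch demonstrates this on a standard JVM (assuming the stock SHA1PRNG provider; `WeakKeyDemo` and `keyFromSeed` are our illustrative names, not code from the app):

```java
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.KeyGenerator;

// Demonstrates why seeding SHA1PRNG with the card number is insecure:
// the same seed always reproduces the same AES key bytes.
public class WeakKeyDemo {
    public static byte[] keyFromSeed(byte[] seed) throws Exception {
        SecureRandom rng = SecureRandom.getInstance("SHA1PRNG");
        rng.setSeed(seed); // deterministic when seeded before any output is drawn
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128, rng);
        return kg.generateKey().getEncoded();
    }

    public static void main(String[] args) throws Exception {
        byte[] pan = "4111111111111111".getBytes();
        // An attacker who knows the PAN derives the identical key.
        System.out.println(Arrays.equals(keyFromSeed(pan), keyFromSeed(pan)));
    }
}
```

Because the "secret" key is a pure function of data the scheme is supposed to protect, the encryption provides no real confidentiality even before considering that the app also logs the key.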
### 5.6 Disclosure of Findings
Cardpliance identified 15 PCI DSS violations in 6 applications from Google Play, which are listed in Table 3. For each of these applications, we tried to reach out to the developers through the email addresses listed on Google Play. All of the emails were successfully delivered to the corresponding addresses. In each email, we included the application name, package name, timeline, and the PCI DSS violations. For each PCI DSS violation, we reported why it was a violation, with reference to the PCI DSS document and the location in the source where the violation occurred.
As of 75 days after disclosure, only one developer responded to our message. A 16.6% response rate is not unexpected considering the fact that responding could raise liability concerns. The responding developer agreed with all but one of the reported vulnerabilities, promising to fix them. We asked for clarification as to why the last issue was not a vulnerability, but did not receive a reply. At the time of camera-ready preparation of this paper, we have not seen an updated version of the application in Google Play.
### 5.7 Threats to Validity
The PCI DSS standard is a human-readable document and does not provide precise requirements. Furthermore, the standard applies to a wide variety of payment technology, and it is not specific to mobile applications collecting credit card information from users. Sections 2 and 4 describe our interpretation of PCI DSS into a precise static analysis task.
False Negatives: Due to the time needed for static program analysis, Cardpliance uses a lightweight filter based on credit card related keywords. Excluding applications during the filtering phase may result in false negatives. While we believe our keyword list is sufficiently comprehensive, it only contains keywords for the English language. Since a keyword search is also used by Test T3 to identify payment UIs, an incomplete keyword list may also result in false negatives for Test T3. Additional false negatives may occur when applications request user input through WebView or use graphical icons to indicate the entry of a credit card number. Cardpliance is also reliant on UiRef [8] to identify taint sources. UiRef does not handle dynamically generated user interfaces.
Static program analysis tools such as Amandroid [19] are neither sound nor complete. While any static analysis can be evaded with sufficient effort, we believe that most legitimate applications have little incentive to violate PCI compliance. We conservatively constructed our rules to mitigate false negatives and created test applications to thoroughly validate the logic for each test. Of note, Cardpliance detected when our test applications sent data over HTTP and sent an unprotected PAN through Android’s SMS API or implicit intent, neither of which were observed in real applications.
Our SSL vulnerability study was limited to poor certificate validation, which is a common issue for Android applications. While we did not identify any http:// URLs, this may have resulted from limitations in static analysis (e.g., string values not present in the code). Our heuristics in Test T4 also did not consider the cryptographic keys or cipher suites when determining whether data is safely obfuscated before being written to persistent storage.
In Test T6, we assume explicit intents are used for ICC within an application. This assumption may introduce false negatives if applications use implicit intents to invoke components in external applications. However, doing so would require detailed knowledge of the external application’s APIs, which may change in subsequent versions. Therefore, we expect it will only occur in rare circumstances.
**False Positives:** We used manual validation to eliminate false positives in our reported findings. False positives were observed in several situations. First, UiRef caused two false positives for Test T1 when determining UI input semantics (i.e., email address and card expiry). Second, a significant cause of false positives (particularly in Tests T1 and T4) was tainting the context object in the findViewById(context, id) source. This context variable is a singleton for the entire Activity. When this common variable is tainted, the taint propagates to unrelated code where the context object is used, causing false positives. Third, several false positives in Test T5 resulted from the context-insensitive identification of vulnerable SSL libraries, which flagged generic uses rather than uses specific to payment credentials. Fourth, false positives resulted from Test T3’s lightweight heuristic for masking, because identifying user input arriving from the network is difficult to perform statically. Finally, Test T6 assumes that implicit intents are only used for ICC between applications. Therefore, Test T6 may produce false positives if an application invokes its own components using implicit intents. However, we did not encounter such false positives in our study.
### 6 Recommendations for Developers
PCI DSS v3.2.1 contains 139 pages of requirements, many of which are not relevant to mobile applications. This section seeks to provide a consolidated list of “best practice” recommendations for developers building Android applications that ask the user to enter a credit card number.
1. **Delegate responsibility of payment processing to established third-party payment providers.** Where possible, we recommend developers consider using established third-party payment processors like Stripe, Square, or PayPal. By not requesting and processing payment information, developers can delegate much of the responsibility of PCI DSS compliance to the payment processor.
2. **Do not write the CVC to persistent storage or log files.** PCI DSS explicitly states that Sensitive Account Data (see Table 1) should never be written to storage. This includes the CAV2, CVC2, CVV2, and CID values.
3. **Avoid writing the credit card number to persistent storage or log files.** While PCI DSS does permit writing the credit card number to storage for a short period (if encrypted), it is safer not to write it at all. If the user needs to save their card number, developers should consider storing it on a secure server along with the user’s account.
4. **Encrypt credit card numbers with secure randomly generated keys before storing locally.** If the credit card number must be saved locally, it should be encrypted with a key managed by Android’s Keystore. Keys hard-coded in applications are easily discovered. Developers should use randomly-generated keys (e.g., SecureRandom class without a hardcoded seed) and follow PCI DSS recommendations for key length and using established cryptographic libraries like javax.crypto.
5. **Always send payment information over a secure connection when transmitting over the network.** Applications should use HTTPS instead of HTTP when sending payment information over the network.
6. **Never modify the SSLSocketFactory or TrustManager within the application code.** If there is a need to pin the SSL connection to a specific CA, use the networkSecurityConfig option in the application’s manifest file. If a test server is needed during development, create a custom certificate for the development server and add the custom certificate to test devices. Developers should also vet that included third-party libraries do not include vulnerable implementations that override the default SSL socket factory and hostname verifiers.
7. **Always mask the credit card number before displaying it.** Only the first six and the last four digits may be displayed on subsequent screens.
8. **Only use explicitly-addressed Intent messages when sharing payment information across Android components.** Using implicit Intents addressed with action strings may result in unintentional access by other apps.
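Recommendation 4 can be illustrated in plain javax.crypto (a sketch under our own naming; on Android, the key should instead be generated inside, and never exported from, the Android Keystore):

```java
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Sketch: generate an AES key from the platform's default entropy source.
// Unlike the flawed scheme in Listing 2, there is no setSeed() call, no
// logging of the key, and no copy of the key written to preferences.
public class SafeKeyGen {
    public static SecretKey newAesKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256); // default SecureRandom, properly seeded by the platform
        return kg.generateKey();
    }

    public static void main(String[] args) throws Exception {
        SecretKey k1 = newAesKey();
        SecretKey k2 = newAesKey();
        // Two invocations yield unrelated keys, as they should.
        System.out.println(java.util.Arrays.equals(k1.getEncoded(), k2.getEncoded()));
    }
}
```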
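Recommendation 7 reduces to a few lines of string handling. The sketch below is our own illustration (`maskPan` is a hypothetical helper, not a PCI DSS API), showing at most the first six and last four digits:

```java
// Sketch of PCI DSS display masking: reveal at most the first six and
// last four digits of a PAN, replacing the middle digits with '*'.
public class PanMask {
    public static String maskPan(String pan) {
        if (pan.length() <= 10) { // too short to safely reveal both ends
            StringBuilder all = new StringBuilder();
            for (int i = 0; i < pan.length(); i++) all.append('*');
            return all.toString();
        }
        StringBuilder sb = new StringBuilder(pan.substring(0, 6));
        for (int i = 0; i < pan.length() - 10; i++) sb.append('*');
        sb.append(pan.substring(pan.length() - 4));
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(maskPan("4111111111111111")); // 411111******1111
    }
}
```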
### 7 Related Work
Securing payment cards has been an important question, leading to seminal papers in computer security [7, 13], and it remains relevant today [4, 10, 13, 33, 34]. For example, magnetic stripe cards are easily cloned [4, 7], and only recently have mechanisms to detect this attack been developed [33, 34]. Instead, much of the research has examined EMV chip-based cards, finding and mitigating vulnerabilities related to unauthenticated terminals [13] and pre-play attacks [10]. Payments, however, have moved to mobile devices, making mobile app security an important question for payments. Recent analyses [11, 31] of branchless banking applications found flaws related to misuse of cryptography, flawed authentication, and SSL/TLS misconfiguration. SSL/TLS security is especially important for mobile payments, which primarily rely on HTTP-based APIs. Mobile platforms validate certificates correctly by default, yet developers frequently break certificate validation, creating the possibility of man-in-the-middle attacks [17, 18, 20, 29, 36]. Studies of mobile payment platforms [40] and documentation [12] in China have also demonstrated vulnerabilities in the payment protocols. Further studies on cryptography in Android apps have shown that incorrect use is rampant [14, 25].
---
https://developer.android.com/training/articles/security-config
Our work also builds on prior work studying information flows in Android apps. Much of this work has built tools to demonstrate undesired leakage of sensitive data [9, 15, 16, 22]. We rely on the extensive body of literature developing static analysis techniques for Android apps [9, 19, 21, 27, 28].
The academic work closest to ours includes UiRef [8], which previously identified credit card collection in Android apps but provided no further analysis. A second study investigated the PCI DSS compliance of e-commerce websites as well as the effectiveness of PCI scanners for the web [30]. However, our work is the first to investigate the question of payment card handling in the context of mobile apps.
### 8 Conclusion
Mobile payment applications improve the standard of trade and commerce. Their ease and flexibility have attracted a wide range of customers, as well as potential adversaries. Therefore, vetting the security of these applications is paramount to reducing fraud and abuse. We designed and used Cardpliance to study 358 popular Android applications on Google Play that request credit card numbers. While our study demonstrates that most of the 358 applications (98.32%) properly handle payment data according to Cardpliance, some applications still improperly store credit card numbers and card verification codes. Overall, the findings from our study demonstrate a largely positive landscape of PCI DSS compliance in popular Android applications on Google Play.
### Acknowledgments
We thank our shepherd, Mary Ellen Zurko and all the anonymous reviewers for their insightful comments. This work is supported in part by NSA Science of Security award H98230-17-D-0080 and NSF SaTC grant CNS-1513690. Any findings and opinions expressed in this material are those of the authors and do not necessarily reflect the views of the funding agencies.
### Availability
The source code for Cardpliance is publicly available at https://github.com/wspr-ncsu/cardpliance.
### References
Releasing Fast and Slow: An Exploratory Case Study at ING
Elvan Kula
Delft University of Technology
Delft, The Netherlands
e.kula@student.tudelft.nl
Ayushi Rastogi
Delft University of Technology
Delft, The Netherlands
a.rastogi@tudelft.nl
Arie van Deursen
Delft University of Technology
Delft, The Netherlands
arie.vandeursen@tudelft.nl
Hennie Huijgens
ING
Amsterdam, The Netherlands
hennie.huijgens@ing.com
Georgios Gousios
Delft University of Technology
Delft, The Netherlands
g.gousios@tudelft.nl
ABSTRACT
The appeal of delivering new features faster has led many software projects to adopt rapid releases. However, it is not well understood what the effects of this practice are. This paper presents an exploratory case study of rapid releases at ING, a large banking company that develops software solutions in-house, to characterize rapid releases. Since 2011, ING has shifted to a rapid release model. This switch has resulted in a mixed environment of 611 teams releasing relatively fast and slow. We followed a mixed-methods approach in which we conducted a survey with 461 participants and corroborated their perceptions with 2 years of code quality data and 1 year of release delay data. Our research shows that: rapid releases are more commonly delayed than their non-rapid counterparts, but have shorter delays; rapid releases can be beneficial in terms of reviewing and user-perceived quality; rapidly released software tends to have a higher code churn, a higher test coverage and a lower average complexity; challenges in rapid releases are related to managing dependencies and certain code aspects, e.g. code debt.
CCS CONCEPTS
• Software and its engineering → Software development process management;
KEYWORDS
rapid release, release delay, software quality, technical debt
1 INTRODUCTION
In today’s competitive business world, software companies must deliver new features and bug fixes fast to maintain sustained user involvement [1]. The appeal of delivering new features more quickly has led many software projects to change their development processes towards rapid release models [2]. Instead of working for months or years on a major new release, companies adopt rapid releases, i.e., releases that are produced in relatively short release cycles that last a few days or weeks. The concept of rapid releases (RRs) [3] is a prevalent industrial practice that is changing how organizations develop and deliver software. Modern applications like Google Chrome [4], Spotify [5] and the Facebook app operate with a short release cycle of 2-6 weeks, while web-based software like Netflix and the Facebook website push new updates 2-3 times a day [6].
Previous work on RRs has analyzed the benefits and challenges of adopting shorter release cycles. RRs are claimed to offer a reduced time-to-market and faster user feedback [3]; releases become easier to plan due to their smaller scope [7]; end users benefit because they have faster access to functionality improvements and security updates [2]. Despite these benefits, previous research in the context of open source software (OSS) projects shows that RRs can negatively affect certain aspects of software quality. RRs often come at the expense of reduced software reliability [3, 8], accumulated technical debt [9] and increased time pressure [10].
As RRs are increasingly being adopted in open-source and commercial software [2], it is vital to understand their effects on the quality of released software. It is also important to examine how RRs relate to timing aspects in order to understand the cases in which they are appropriate. Therefore, the overall goal of our research is to explore the timing and quality characteristics of rapid release cycles in an industrial setting. By exploring RRs in industry, we can obtain valuable insights in what the urgent problems in the field are, and what data and techniques are needed to address them. It can also lead to a better understanding and generalization of release practices. Such knowledge can provide researchers with promising research directions that can help the industry today.
We performed an exploratory case study of rapid releases at ING, a large Netherlands-based internationally operating bank that develops software solutions in-house. We identified 433 teams out of 611 software development teams at ING as rapid teams, i.e., teams that release more often than others (release cycle time ≤ 3 weeks). The remaining 178 teams with a release cycle > 3 weeks are termed
as non-rapid teams, i.e., teams that release less often than others. The large scale and mixed environment of ING allowed us to make a comparison between RRs and non-rapid releases (NRs) to explore how release cycle lengths relate to time and quality aspects of releases. We followed a mixed-methods approach in which we conducted a survey with 461 software engineers and corroborated their perceptions with 2 years of code quality data and 1 year of release delay data. To the best of our knowledge, this is the first exploratory study in the field of RRs. It is also the first study to analyze RRs at a scale of over 600 teams, contrasting them with NRs.
Developer answers to our survey indicate mixed perceptions of the effect of RRs on code quality. On one hand, RRs are perceived to simplify code reviewing and to improve the developers’ focus on user-perceived quality. On the other hand, developers reported the risk of making poor implementation choices due to deadline pressure and a short-term focus. Our data analysis supports the views on code quality improvements as indicated in a higher test coverage, a lower number of coding issues and a lower average complexity. Regarding release delays, our data analysis corroborates the belief of developers that RRs are more often delayed than NRs. However, RRs are correlated with a lower number of delay days per release than NRs. A prominent factor that is perceived to cause delay is related to dependencies, including infrastructural ones.
2 CONTEXT
ING is a large multinational financial organization with about 54,000 employees and over 37 million customers in more than 40 countries [11]. In 2011, ING decided to shorten their development cycles when they planned to introduce Mijn ING, a personalized mobile application for online banking. Before 2011, teams worked with release cycles of 2 to 3 months between major version releases. However, ING wanted to cut the cycles down to less than a month to stay ahead of competition. In 2011, the bank introduced DevOps teams to get developers and operators to collaborate in a more streamlined manner. Currently, ING has 611 globally distributed DevOps teams that work on various internal and external applications written in Java, JavaScript, Python, C and C#.
2.1 Time-based Release Strategy
All teams at ING work with a time-based release strategy, in which releases are planned for a specific date. In general, the teams deliver releases at regular week intervals. However, the release time interval differs across teams and occasionally within a team. The latter can appear in case of a release delay.
Defining RRs versus NRs. Although ING envisioned to cut release cycles down to less than a month, not all teams have been able to make this shift, possibly due to their application’s nature or customers’ high security requirements. The development context at ING is therefore mixed, consisting of teams that release at different speeds. Figure 2 presents an overview of the teams’ release frequencies in the period from June 01, 2017 to June 01, 2018. The distribution shown is not fixed as teams intend to keep reducing their release cycle times in the future.
For this study we divide the teams at ING in a rapid group and non-rapid group based on how fast they release relative to each other. This distinction allows for a statistical comparison between the two groups to explore if a shorter release cycle length influences time and quality aspects of releases. We acknowledge that, within a group, there might be differences among teams with different release cycle lengths.
Classification threshold. As all teams at ING are expected to follow the rapid release model, there is no specific culture of RRs versus NRs that enabled us to make a differentiation between the two (e.g., letting teams self-identify). We decided to use the median release cycle time (3 weeks) as a point of reference since the distribution shown in Figure 2 contains outliers. Using the median as a point of reference, we classified teams as either rapid (release duration of ≤ 3 weeks) or non-rapid (release duration > 3 weeks).
This way we identified 433 rapid teams (71%) and 178 non-rapid teams (29%) at ING. In the same manner, we classified releases as either rapid (time interval between release date and start of development phase ≤ 3 weeks) or non-rapid otherwise. In general, rapid teams push rapid releases. However, if a delay causes a cycle length to exceed 3 weeks, a rapid team can push a non-rapid release.
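The median-based split described above can be sketched in a few lines of Python. The team names and cycle times below are hypothetical, and the helper is only an illustration of the thresholding logic, not ING's actual tooling.

```python
from statistics import median

def classify_teams(cycle_weeks, threshold=None):
    """Split teams into rapid (<= threshold) and non-rapid (> threshold).

    If no threshold is given, the median cycle time is used, mirroring
    the paper's choice of the median (3 weeks) as a reference point
    that is robust to outliers in the distribution.
    """
    if threshold is None:
        threshold = median(cycle_weeks.values())
    rapid = {t for t, w in cycle_weeks.items() if w <= threshold}
    non_rapid = set(cycle_weeks) - rapid
    return rapid, non_rapid

# Hypothetical release cycle times (in weeks) for a handful of teams.
cycles = {"team_a": 1, "team_b": 2, "team_c": 3, "team_d": 6, "team_e": 8}
rapid, non_rapid = classify_teams(cycles, threshold=3)
print(sorted(rapid))      # teams releasing every 3 weeks or faster
print(sorted(non_rapid))  # teams releasing less often
```

The same thresholding applies unchanged at the release level, with the development-phase interval of a single release in place of a team's typical cycle time.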
Demographics of teams. Teams selected for analysis are similar in size (both rapid and non-rapid: 95% CI of 5 to 9 members) and number of respondents (both rapid and non-rapid: 95% CI of 1 to 2 respondents). Teams are also similar in their distribution of experience in software development (both rapid and non-rapid: 95% CI of 10 to 20 years). The projects are similar in size and domain.1
---
1 A replication package containing survey questions and demographics data is publicly available at https://figshare.com/s/4b961b849e1720c6e6f.
2.2 DevOps and Continuous Delivery
To make shorter release cycles practical, ING introduced the Continuous Delivery as a Service (CDaaS) project in 2015 to automate the complete software delivery process. ING put a continuous delivery pipeline in place for all teams to enforce an agile development process, and reduce their testing and deployment effort. Figure 1 depicts the pipeline. Jenkins, the CI server, is responsible for monitoring the quality of the source code with the static analysis tool SonarQube.² Vassallo et al. [12] performed a case study at ING about the adoption of the delivery pipeline during development activities.
3 RESEARCH METHOD
The goal of this study is to explore the timing and quality characteristics of rapid release cycles in an industrial setting.
Timing characteristics. Because RRs are driven by the idea of reducing release cycle time, timing aspects are intrinsic to rapid release cycles. By examining how often and why rapid releases are delayed, we can deepen our understanding of the effectiveness of rapid releases and better determine in which cases they are appropriate to use. Teams at ING work with a time-based release strategy, in which releases are planned for a specific date. The only time teams deviate from their fixed cycle length is in case of a release delay. Here we want to find out how often projects deviate from their regular cycle length and why. This leads to our first two research questions:
RQ1: How often do rapid and non-rapid teams release software on time?
RQ2: What factors are perceived to cause delay in rapid releases?
Quality characteristics. It is important to examine the quality characteristics of rapid releases to get an insight into the way shorter release cycles affect the internal code quality and user-perceived quality of software in organizations. By exploring the quality characteristics of RRs, we may better understand their long-term consequences and inform the design of tools to help developers manage them. We only focus on internal code quality as we do not have access to data on external (user-perceived) quality at ING. We define our last research question as follows:
RQ3: How do rapid release cycles affect code quality?
3.1 Study Design
We conducted an exploratory case study [13] of rapid releases at ING with two units of analysis: the group of rapid teams and the group of non-rapid teams. Our case study combines characteristics from interpretive and positivist type case studies [13]. From an interpretive perspective, our study explores RRs through participants’ interpretation of the development context at ING. From a positivist perspective, we draw inferences from a sample of participants to a stated population.
Data triangulation. To get a better understanding of RRs, we addressed the research questions applying data triangulation [14]. We combined qualitative survey data with quantitative data on release delays and code quality to present detailed insights. This is also known as a mixed-methods approach [15, 16]. Since we wanted to learn from a large number of software engineers and diverse projects, we collected qualitative data using an online survey in two phases [17]. In the first phase, we ran a pilot study with two rapid and two non-rapid teams at ING. This allowed us to refine the survey questions. In the second phase, we sent the final survey to all teams at ING Netherlands (ING NL). In addition, we analyzed quantitative data stored in ServiceNow and SonarQube to examine the timing and quality characteristics of rapid releases, respectively.³ We compared the perceptions of developers with release delay data and code quality data for rapid and non-rapid teams. An overview of our study set-up is shown in Figure 3. For RQ2, we only analyzed survey data because quantitative (proxy) data on release delay factors is not being collected by ING.
The quantitative data on release delays and code quality was aggregated at the release level (unless stated otherwise), while developers were asked to reflect on team performance in the survey.
To be consistent in aggregation, we used the same rapid/non-rapid classification threshold of 3 weeks for both teams and releases.
3.2 Collecting Developers’ Perceptions
We sent the survey to members of both rapid teams and non-rapid teams. To ensure that we collected data from a large number of diverse projects, we selected the members of all teams at ING NL as our population of candidate participants. In total, we contacted 1803 participants in more than 350 teams, each working on their own application that is internal or external to ING. The participants have been contacted through projects’ mailing lists.
Survey design. The survey was organized into five sections for research related questions, plus a section aimed at gathering demographic information of the respondents (i.e., role within the team, total years of work experience and total years at ING). The five sections were composed of open-ended questions, intermixed with multiple choice or Likert scale questions. To address RQ1, we asked respondents to fill in a compulsory multiple choice question on how often their team releases on time. To get deeper insights into how rapid versus non-rapid teams deal with release delays, we included an open-ended question asking respondents what their teams do when a release is delayed. For RQ2, we provided respondents with an open-ended question to gather unbounded and detailed responses on delay factors. For RQ3, we provided respondents with a set of two compulsory open-ended questions, asking respondents how they perceive the impact of rapid release cycles on their project’s internal code quality, and whether they think that rapid release cycles result in accumulated technical debt. We added a set of four optional 4-level Likert scale questions (each addressing the
impact of RRs on testing debt, design debt, documentation debt and coding debt). In addition, we included a mandatory multiple choice question about the respondent team’s release frequency and a few optional questions on how they perform quality monitoring.
³https://www.servicenow.com/
Figure 3: Overview of our mixed-methods study set-up
Survey operation. The survey was uploaded onto Collector, a survey management platform internal to ING NL. The candidate participants were invited using an invitation mail featuring the purpose of the survey and how its results can enable us to gain new knowledge of rapid releases. For the pilot run, we randomly selected two rapid and two non-rapid teams. We e-mailed the 24 employees in the four teams and received 7 responses (29% response rate). For the final survey, we e-mailed 1803 team members and obtained 461 responses (26% response rate). Respondents had a total of three weeks to participate in the survey. We sent two reminders to those who did not participate yet at the beginning of the second and third weeks. The survey ran from June 19 to July 10, 2018.
Demographics of respondents. Out of the 461 responses we received, 296 respondents were from rapid teams (64%) and the remaining 165 respondents were from non-rapid teams (36%). A majority (70%) of our respondents self-identified as software engineer, while the rest identified themselves as managers (6%), analysts (23%) or other (1%) role at the IT department of ING. Most participants (77%) reported to have more than ten years of software development experience and more than five years of experience at ING (59%). For RQ3, we filtered out 259 respondents who did not identify as a software engineer in a rapid team.
Survey analysis. We applied manual coding [18] to summarize the results of the four open-ended questions during two integration rounds. We coded by statement, and codes continued to emerge until the end of the process. In the first round, the first and the last author used an online spreadsheet to code a 10% sample (40 mutually exclusive responses) each. They assigned at least one and up to three codes to each response. Next, they met in person to integrate the obtained codes, meaning that the codes were combined by merging similar ones, and generalizing or specializing the codes if needed. When new codes emerged, they were integrated in the set of codes. The first author then applied the integrated codes to 90% of the answers and the second author did this for the remaining 10% of the responses. In the second round, the first two authors had another integration meeting which resulted in the final set of codes. The final set contained 18% more codes than the set resulting from the first integration round.
3.3 Collecting Software Metrics
To analyze the quality of software written by non-rapid teams in comparison with rapid teams, we extracted the SonarQube measurements of releases that were shipped by teams that actively use SonarQube as part of the CDaaS pipeline. Although all teams at ING have access to SonarQube, 190 of them run the tool each time they ship a new release. We analyzed the releases shipped by these 190 teams in the period from July 01, 2016 to July 01, 2018. In total, we studied the major releases of 3048 software projects. 67% of these releases were developed following a rapid release cycle (≤ 3 weeks) with a median value of 2 weeks between the major releases. The remaining 33% of the releases were developed following a non-rapid release cycle (> 3 weeks) with a median value of 6 weeks between the major releases.
Processing software metrics. First, we checked the releases in SonarQube, and extracted the start dates of their development phase and their release dates. Then, we classified the releases as non-rapid or rapid based on the time interval between their release date and start date of the development phase (using 3 weeks as threshold). We did not consider the time period between the release dates of two consecutive releases since the development of a release can start before the release of the prior one. Although SonarQube offers a wide range of metrics, we only considered the subset of metrics that are analyzed by all teams at ING. For each release, we extracted the metrics that all teams at ING measure to assess their coding performance. Out of these metrics, code churn is used to assess the level of coding activity within a code base, and the remaining metrics are seen as indicators for coding quality:
(1) Coding Standard Violations: the number of times the source code violates a coding rule. A large number of open issues can indicate low-quality code and coding debt in the system [19]. As part of this class of metrics, we looked more specifically into the Cyclomatic Complexity [20] of all files contained in a release.
(2) Branch Coverage: the average coverage by tests of branches in all files contained in a release. A low branch coverage can indicate testing debt in the system [21].
(3) Comment Density: the percentage of comment lines in the source code. A low comment density can be representative of documentation debt in the system [21].
(4) Code Churn: the number of changed lines of code between two consecutive releases. Since in RRs code is released in smaller batches, it is expected that the absolute code churn is lower in rapid teams but it is not clear how the normalized code churn is influenced.
As SonarQube does not account for differences in project size, we normalized the metrics by dividing them by Source Lines of Code (SLOC): the total number of lines of source code contained in a release. Since code churn is calculated over the time between releases and this differs among teams, we normalized code churn by dividing it by the time interval between the release date and start date of the development phase. The code complexity and lines of code were used to examine if differences observed in the software quality are potentially caused by changes in the source code’s size or complexity. Finally, we performed a statistical comparison of the metrics between the group of RRs and the group of NRs.
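A minimal sketch of this normalization step, assuming one dictionary per release with illustrative field names (not SonarQube's actual schema): counts are divided by SLOC, and code churn is additionally divided by the development-time interval, since cycle lengths differ across teams.

```python
def normalize_release_metrics(release):
    """Normalize the quality metrics of one release by project size.

    Field names ("sloc", "violations", ...) are hypothetical stand-ins
    for the SonarQube measurements described in the text. Branch
    coverage is already a ratio and needs no size normalization.
    """
    sloc = release["sloc"]
    weeks = release["dev_weeks"]  # interval between dev start and release
    return {
        "violations_per_kloc": 1000 * release["violations"] / sloc,
        "churn_per_loc_per_week": release["churn"] / sloc / weeks,
        "branch_coverage": release["branch_coverage"],
        "comment_density": release["comment_lines"] / sloc,
    }

# Illustrative measurements for a single (fictional) release.
r = {"sloc": 20000, "dev_weeks": 2, "violations": 50,
     "churn": 4000, "branch_coverage": 0.8, "comment_lines": 3000}
print(normalize_release_metrics(r))
```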
3.4 Collecting Release Delay Data
To compare the occurrence and duration of delays in releases of rapid and non-rapid teams, we extracted log data from ServiceNow, a backlog management tool used by most teams at ING NL. We received access to log data of 102 teams for releases shipped between October 01, 2017 and October 01, 2018. First, we checked the releases of each team in the system, and extracted their planned release dates and actual release dates. The releases were classified as either rapid or non-rapid based on the time interval between their planned release date and that of the previous release. We acknowledge that the development of a release might start before the planned release date of the previous release. This should not affect our distinction between releases as they are classified based on planned release dates.
We aggregated the delay measurements to perform a statistical analysis. According to the Mann-Whitney U test, the differences observed in the actual percentage of timely releases for rapid and non-rapid teams are statistically significant at a confidence level of 95% (p-value < 0.001). We measured an effect size (Cliff’s delta) of 0.833, which corresponds to a large effect.
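The effect-size computation can be illustrated with a small pure-Python helper; a full analysis would pair it with the Mann-Whitney U test (e.g., scipy.stats.mannwhitneyu) to obtain the p-value. The samples below are toy data, not the study's measurements.

```python
def cliffs_delta(xs, ys):
    """Cliff's delta effect size: P(x > y) - P(x < y) over all pairs.

    Values near 0 indicate heavily overlapping samples; by convention
    |delta| >= 0.474 is read as a large effect, which is how values
    such as the 0.833 reported above are interpreted.
    """
    gt = sum(1 for x in xs for y in ys if x > y)
    lt = sum(1 for x in xs for y in ys if x < y)
    return (gt - lt) / (len(xs) * len(ys))

# Toy data: fully separated samples give delta = 1.0,
# identical samples give delta = 0.0.
print(cliffs_delta([5, 6, 7], [1, 2, 3]))
print(cliffs_delta([1, 2, 3], [1, 2, 3]))
```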
Figure 4 shows that a majority of both rapid and non-rapid teams release software more often on time than our respondents believe. We could not find any rapid or non-rapid team that releases software on time only 0 - 25% of the time. The data corroborates the perception of respondents that rapid teams are more often delayed than their non-rapid counterparts. At one extreme, 19% of rapid teams are on time 75% to 100% of the time, while the percentage increases to 32% for non-rapid teams.
Delay duration. Although rapid teams are more often delayed than non-rapid teams, analysis of the delays at release level shows that delays in RRs take a median time of 6 days, while taking 15 days (median) in NRs. According to the Mann-Whitney U test, this difference is statistically significant at a confidence level of 95% with a large effect size of 0.695.
Application domains. Further analysis of the data showed a similar trend as observed in the survey responses. A majority of the rapid teams that are less than 50% of the time on track develop mobile applications (47%, average delay duration: 7 days) and APIs (12%, average delay duration: 5 days). Using the Mann-Whitney U test, we did not find any significant difference in project domains for the rapid teams that are on track more than 75% of the time.
How do teams address release delays? In the responses to the open-ended question on what teams do when a release gets delayed, we distinguished two main approaches that both rapid and non-rapid teams undertake. Teams report to address delays through rescheduling, i.e., the action of postponing a release to a new date, and re-planning or re-prioritizing the scope of the delivery. Both groups also report to have the option to release as soon as possible, i.e., in the time span of a few days. Rapid teams mentioned both approaches equally often, while a majority (76%) of non-rapid teams report to reschedule. This suggests that rapid teams are more flexible regarding release delays.
RQ2: What factors are perceived to cause delay in rapid releases?
For this research question, we only analyzed survey responses, because quantitative data on release delay factors is not being collected by ING. Survey respondents mentioned several factors that they think introduce delays in releases. A list of these factors arranged in decreasing order of their occurrence in responses of rapid teams is shown in Figure 5.
Figure 5 shows that dependencies and infrastructure (which can be seen as a specific type of dependency) are the most prominent factors that are perceived to cause delay in rapid teams. Other factors which were listed in at least 10% of responses are testing (in general and for security), following mandatory procedures (such as for quality assurance) prior to every release, fixing bugs, and scheduling the release, including planning effort and resources. Non-rapid teams experience similar issues. Similar to rapid teams, non-rapid teams report to be largely influenced by dependencies. The other factors which were considered important by at least 10% of the respondents are scheduling, procedure, and security testing.
Further analysis of the most prominent factor perceived to delay rapid and non-rapid teams (dependency) explained the sources of dependency in the organization. Developers, in their open-ended responses, attributed two types of dependencies to cause delay in their releases. At a technical level, developers have to deal with cross-project dependencies. Teams at ING work with project-specific repositories and share codebases across teams within one application. At a workflow level, developers mention to be hindered by task dependencies. Inconsistent schedules and unaligned priorities are perceived to cause delays in dependent teams. Many developers seem to struggle with estimating the impact of both types of dependencies in the release planning.
Another factor which is perceived to prominently affect rapid and non-rapid teams is security testing. For rapid teams, developers report that security tests are almost always delayed because of an unstable acceptance environment or missing release notes. They further add that any software release needs to pass the required security penetration test and secure code review, which are centrally performed by the CIO Security department at ING. Respondents report that they often have to delay releases because of “delayed penetration tests” [r66], “unavailability of security teams” [r133] and “acting upon their findings” [r86].
Rapid teams also report delays related to infrastructure and testing (in general). These factors do not feature in the top mentioned factors influencing non-rapid teams. Regarding infrastructure, respondents mention that issues in infrastructure are related to the failure of tools responsible for automation (such as Jenkins and Nolio) and sluggishness in the pipeline caused by network or proxy issues. Respondent [r168] states that “Without the autonomy and tools to fix itself, we have to report these issues to the teams of CDaas and wait for them to be solved”. Regarding testing, developers mention that the unavailability or instability of the test environment induces delay in releasing software. Respondent [r11] states that “In that case we want to be sure it was the environment and not the code we wish to release. Postponing is then a viable option”.
Further analysis of the survey responses showed that the rapidly released mobile applications and APIs that are least often on time (found in RQ1) are hindered by dependencies and testing. Many mobile app developers report to experience delay due to dependencies on a variety of mobile technologies and limited testing support for mobile-specific test scenarios. API developers report to be delayed by dependencies in back-end services and expensive integration testing.
RQ3: How do rapid release cycles affect code quality?
For this research question, we considered 202 survey responses from developers in rapid teams. We removed the 165 respondents from non-rapid teams, as well as the 94 respondents from rapid teams who did not identify as developers at ING.
A. Developers’ Perceptions
Developers have mixed opinions on how RRs affect code quality. A distribution of the effect of RRs (improve, degrade, no effect) on different factors related to code as perceived by developers is shown in Figure 6. It shows responses suggesting improvements in quality in green, degradation in quality in red and no effect in grey.
Quality improvement. A majority of developers perceive that the small changes in RRs make the code easier to review, positively impacting the refactoring effort (e.g., “It gets easier to review the code and address technical debt” [r16]). Developers also report that the small deliverables simplify the process of integrating and merging code changes, and they lower the impact of errors in development. A few developers mention that RRs motivate them to write modular and understandable code.
A large number of developers mention the benefits of rapid feedback in RRs. Feedback from issue trackers and the end user allows teams to continuously refactor and improve their code quality based on unforeseen errors and incidents in production. Rapid user feedback is perceived to lead to a greater focus of developers on customer value and software reliability (e.g., “[RRs] give more insight in bugs and issues after releasing. [They] enable us to respond more quickly to user requirements” [r232], “We can better monitor
the feedback of the customers which increased [with RRs]” [r130]). This enables teams to deliver customer value at a faster and more steady pace (e.g., “[With RRs] we can provide more value more often to end users.” [r65], “Features are delivered at a more steady pace” [r16]).
Quality degradation. Many developers report to experience an increased deadline pressure in RRs, which can negatively affect the code quality. Developers explain to feel more pressure in shorter releases as these are often viewed as “a push for more features” [r143]. They believe that this leads to a lack of focus on quality and an increase in workarounds (e.g., “Readiness of a feature becomes more important than consistent code.” [r26]). A few developers report to make poor implementation choices under pressure (e.g., “In the hurry of a short release it is easy to make mistakes and less optimal choices” [r320]).
Technical debt. We checked whether the respondents monitor technical debt in their releases through a multiple choice question in the survey. 168 out of 202 developers of rapid teams reported to monitor the debt in their project. For our analysis, we only considered responses from these developers and we focused on four common types of debt as identified in the work of Li et al. [19]: coding debt, design debt, testing debt and documentation debt. An overview of the responses to the Likert scale questions is shown in Figure 7. According to a majority of the developers, RRs do not result in accumulated debt of any type. Since the developers’ explanations on coding debt were similar to the factors mentioned in Figure 6, we will now focus on other types of debt:
Design debt. Many developers report that the short-term focus of RRs makes it easier to lose sight of the big picture, possibly resulting in design debt in the long run. Especially in the case of cross-product collaboration, RRs do not leave enough time to discuss design issues (e.g., “We are nine teams working together on the same application. Due to time constraints design is often not discussed between the teams.” [r147]).
Testing debt. A majority of developers report that RRs have both positive and negative effects on their testing effort. RRs are perceived to result in a more continuous testing process since teams update their test suite in every sprint. However, due to the shorter time span, developers focus their testing effort in RRs on new features and high-risk features. This focus is found to “allow more complete testing of new features” [r184] and to “make it easier to determine what needs to be tested” [r186]. Developers report spending less time on creating regression tests in RRs.
Documentation debt. A majority of the developers do not perceive a decrease in the amount of documentation in RRs. However, developers report that the short-term focus in RRs reduces the quality of documentation. When there is pressure to quickly deliver new functionality, documentation quality receives the lowest priority (e.g., “The need for high quality documentation is low in the short-term” [r155], “Documentation is the first which will be dropped in case of time pressure” [r84]). Developers report cutting corners by using self-documenting code, and by skipping high-level documentation regarding the global functionality and integration of software components.
Developers perceive the deadline pressure in rapid releases to reduce code quality. The short-term focus in rapid releases may result in design debt in the long-run.
B. Software Quality Measurements
To gain a more detailed insight into the code quality of teams, we performed a comparative analysis of SonarQube measurements for RRs and NRs. To account for differences in project size, we normalized all metrics by SLOC. The Shapiro-Wilk test [23] shows that the data is not normally distributed. Therefore, we use the non-parametric statistical Mann-Whitney U test [22] to check whether the differences observed between NRs and RRs are statistically significant. To adjust for multiple comparisons we use the Bonferroni-corrected significance level [24] by dividing the conventional significance level of 0.05 by 5 (the total number of tests), giving a corrected significance level of 0.01. This means that the p-value needs to be < 0.01 to reject the null hypothesis. The effect size is measured as Cliff’s delta [25]. The results of our analysis are summarized in Table 1, and presented below:
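The statistical procedure above can be sketched as follows. The per-SLOC metric values are illustrative placeholders, not the paper's data, and Cliff's delta is implemented directly; in practice a library routine such as `scipy.stats.mannwhitneyu` would supply the p-value to compare against the corrected level.

```python
def cliffs_delta(xs, ys):
    """Cliff's delta: P(x > y) - P(x < y) over all cross-group pairs."""
    greater = sum(1 for x in xs for y in ys if x > y)
    less = sum(1 for x in xs for y in ys if x < y)
    return (greater - less) / (len(xs) * len(ys))

# Bonferroni correction: divide the conventional 0.05 level by the 5 tests
alpha = 0.05 / 5  # -> 0.01, as in the text

# Hypothetical normalized metric values (e.g. coding issues per SLOC)
rapid = [0.12, 0.08, 0.15, 0.09, 0.11, 0.07]
nonrapid = [0.20, 0.18, 0.25, 0.16, 0.22, 0.19]

d = cliffs_delta(rapid, nonrapid)
print(alpha, round(d, 2))  # every rapid value is below every non-rapid value
```

A delta of -1.0 here indicates complete separation of the two samples; values nearer 0 indicate heavy overlap, which is why the effect size is reported alongside the p-value.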
Figure 6: Developer perception of the impact of rapid releases on code quality aspects
Figure 7: The impact of rapid release cycles on different types of technical debt
### 5 DISCUSSION

We now discuss our main findings and consider implications for research and practice.

### 5.1 Implications for Researchers

**Applicability of RRs.** Our results indicate that projects in some application domains are more often delayed than in other application domains. These findings suggest that project type, or perhaps certain inherent characteristics of different project types, is a delay factor in RRs. A study of project properties that delay RRs could help the field determine when RRs are appropriate to use. Which project types and organizations fit well with RRs? Initial work in this direction has been carried out by Kerzazi & Khomh [26] and Bellomo et al. [27].
Total release delay and customer value. Our results show that RRs are more commonly delayed than NRs. However, it is not clear what this means for the project as a whole, given that NRs are correlated with a longer delay duration. How do RRs impact the total delay of a project? Future work should examine the number of delay days over the duration of a project. How do release delays evolve throughout different phases of a project? This also raises the question whether RRs (despite those delays) help companies to deliver more customer value in a timely manner. Our respondents report that RRs enable them to deliver customer value at a faster and more steady pace, which suggests that over time rapid teams could deliver more customer value than non-rapid teams. Future research in this direction could give us a better insight into the effectiveness of RRs in terms of customer experience.
The balance game: security versus rapid delivery. Many of our respondents perceive security testing to induce delay in rapid teams. This suggests that organizations make a trade-off between rapid delivery and security. A financial organization like ING, with security critical systems, may choose to release less frequently to increase time available for security testing. As one of the respondents puts it “It is a balance game between agility and security. Within ING the scale balances heavily in favor of security, thereby effectively killing off agility.” [r21] Further analysis is needed to explore the tension between rapid deployment of new software features and the need for sufficient security testing. To what extent does security testing affect the lead time of software releases? In this context, Clark et al. [28] studied security vulnerabilities in Firefox and showed that RRs do not result in higher vulnerability rates. Further research in this direction is needed to clear up the interaction between both factors.
Nevertheless, the time span of a release cycle limits the amount of security testing that can be performed. Therefore, further research should also focus on agile security, working towards the automation of security testing and the design of security measures that can adapt to the changes in a rapid development environment. New methods for rapid security verification and vulnerability identification could help organizations to keep pace with RRs.
Dependency management. We found that the timing of both RRs and NRs is perceived to be influenced by dependencies in the ecosystem of the organization. Our respondents report the difficulty of assessing the impact of dependencies. There is a need for more insight into the characteristics and evolution of these dependencies. How can we quantify their combined effect on the overall ecosystem? This problem calls for further research into streamlining dependency management. Decan et al. [29, 30] studied how dependency networks tend to grow over time, both in size and package updates, and to what extent ecosystems suffer from issues related to package dependency updates. Hedjedrup et al. [31] proposed the construction of a fine-grained dependency network extended with call graph information. This would enable developers to perform change impact analysis at the ecosystem level and on a version basis.
**Code quality.** We found that RRs can be beneficial in terms of code reviewing and user-perceived quality, even in large organizations working with hundreds of software development teams. This complements the findings reported by [2, 32, 33] on the ease of quality monitoring in RRs. Our quantitative data analysis shows that software built with RRs tends to have a higher branch coverage, a lower number of coding issues and a lower average complexity. Previous research [34] reported improvements of test coverage at unit-test level but did not look into other code quality metrics. It is an interesting opportunity for future work to analyze how RRs impact code quality metrics in other types of organizations and projects.
Challenges related to code aspects in RRs concern design debt and the risk of poor implementation choices due to deadline pressure and a short-term focus. This is in line with previous work [9, 10] that showed that pressure to deliver features for an approaching release date can introduce code smells. A study of factors that cause deadline pressure in rapid teams would be beneficial. It may be that projects that are under-resourced or under more time pressure are more likely to adopt RRs, instead of RRs leading to more time pressure. The next step would be to identify practices and methods that reduce the negative impact of the short-term focus and pressure in RRs.
### 5.2 Implications for Practitioners
Here we present a set of areas that call for further attention from organizations that work with rapid releases.
**Managing code quality.** In our study, we observed that a minority of rapid teams claim not to experience the negative consequences of RRs on code quality. We noticed that most self-reported ‘good’ teams do regular code reviews and dedicate 25% of the time per sprint to refactoring. Teams that experience the downsides of RRs report spending less than 15% of the time on refactoring or using ‘clean-up’ cycles (i.e., cycles dedicated to refactoring). Although further analysis is required, we recommend organizations to integrate regular (peer) code reviews in their teams’ workflows and to apply continuous refactoring for at least 15% of the time per sprint.
**Release planning.** Regarding delays, our respondents express the need for more insight on improving software effort estimation and streamlining dependencies. Although software effort estimation is well studied in research (even in RRs: [35, 36]), issues relating to effort estimation continue to exist in industry. This calls for a better promotion of research efforts on release planning and predictable software delivery. Organizations should invest more in workshops and training courses on release planning for their engineers. We also recommend organizations to apply recent approaches, such as automated testing and *Infrastructure as Code*, to the problem of delays in RRs.
**Releasing fast and responsibly.** Developers report feeling more deadline pressure in RRs, which can result in poor implementation choices and an increase in workarounds. This is also reported by previous work [2, 10]. Organizations should not view RRs as a push for features. A sole focus on functionality will harm their code quality and potentially slow down releases in the long run. We believe that it is less effective to motivate organizations to slow down and produce better code quality than to help developers release fast while breaking less. Future work should attempt to enhance the ways that software development teams communicate, coordinate and assess coding performance to enable organizations to release fast while maintaining high code quality.
### 5.3 Future Work
Our work reveals several future research directions.
**Fine-grained analysis of release cycle length.** An interesting opportunity for future work is to explore whether our findings still hold for more fine-grained groupings of weekly release intervals. Another promising opportunity is to explore possible confounding factors. Although we explored the role of several factors that are likely to affect release cycle time (see Section 2.1), further analysis is required to explore confounding factors. Participant observations suggest that customers’ high security requirements might play a role. Future work could examine these factors and eliminate them through statistical controls (e.g., through a form of multiple regression).
**Long-term impact of RRs.** An interesting opportunity for future work is to explore the long-term effect of RRs. By analyzing longitudinal data of quality measurements before and after teams switched to RRs, it can be measured how metrics relate to release cycle length over time. What is the long-term effect of RRs on code quality and user-perceived quality of releases? How do issues related to design debt and time pressure develop in the long run?
**Feedback-driven development.** Our results show that the rapid feedback in RRs is perceived to improve the focus of developers on the quality of software. Feedback obtained from end users, code reviews and static analysis can be used to guide teams to focus on the most valuable features, and to enable automated techniques to support various development tasks, including log monitoring, and various forms of testing. Such techniques can be used to further reduce the cycle length. An exploration of these opportunities would help organizations to improve the quality of their software. An extension of the data with runtime information (i.e., performance engineering) and live user feedback that is integrated into the integrated development environment could be beneficial.
### 6 LIMITATIONS
**Internal validity.** One factor that can affect the qualitative analysis is the bias induced by the involvement of the authors with the studied organization. The first author interned at ING at the time of this study while the third author works at ING. To counter the biases which might have been introduced by the first and third authors, the last author (from Delft University of Technology) helped in designing survey questions. The observations and interpretation of the findings were cross-validated by the other two authors. Another risk of the coding process is the loss of accuracy of the original response due to an increased level of categorization. To mitigate this risk, we allowed multiple codes to be assigned to the same answer.
In our survey design we phrased and ordered the questions in a sequential order of activities to avoid leading questions and order effects. Social desirability bias [37] may have influenced the responses. To mitigate this risk, we made the survey anonymous and let the participants know that the responses would be evaluated statistically.
We cannot account for confounding variables that might have affected our findings. Even though all teams at ING are encouraged to release faster, not all teams have been able to reduce their release cycle length to 3 weeks or less. This suggests that there are confounding factors that differentiate rapid and non-rapid teams. Examples of potential factors are project difficulty and security requirements. It is also possible that rapid teams at ING work on software components that are easier to release rapidly. This might have led to overly optimistic results for rapid teams.
**External validity.** As our study only considers one organization, external threats are concerned with our ability to generalize our results. Although the results are obtained from a large, global organization and we control for variations using a large number of participants and projects spanning a time period of two years, we cannot generalize our conclusions to other organizations. Replication of this work in other organizations is required to reach more general conclusions. We believe that further in-depth explorations (e.g., interviews) and multiple case studies are required before establishing a general theory of RRs. Our findings are likely applicable to organizations that are similar to ING in scale and security level. We cannot account for the impact of the large scale of ING on our results. Further research is required to explore how the scale of organizations and projects relates to the findings. Our findings indicate a trade-off between rapid delivery and software security. In a financial organization like ING there is no tolerance for failure in some of their business-critical systems. This may have influenced our results, making our findings likely applicable to organizations with similar business- or safety-critical systems. Replication of this study in organizations of different scale, type and security level is therefore required.
7 RELATED WORK
Early studies on RRs focused on the motivations behind their adoption. Begel and Nagappan [38] found that the main motivations relate to easier planning, more rapid feedback and a greater focus on software quality. Our study and others [2, 32–34, 39, 40] found similar benefits. We also found that RRs are perceived to enable a faster and more steady delivery of customer value.
Recent efforts have examined the impact of switching from NRs to RRs on the time and quality aspects of the development process:
**Time aspects.** Costa et al. [8] found that issues are fixed faster in RRs but, surprisingly, RRs take a median of 54% longer to deliver fixed issues. This may be because NRs prioritize the integration of backlog issues, while RRs prioritize issues that were addressed during the current cycle [41]. Kerzazi and Khomh [26] studied the factors impacting the lead time of software releases and found that testing is the most time consuming activity along with socio-technical coordination. Our study complements prior work by exploring how often rapid teams release software on time and what the perceived causes of delay are. In line with [26], we found that testing is one of the top mentioned delay factors in RRs.
The strict release dates in RRs are claimed to increase the time pressure under which developers work. Rubin and Rinard [10] found that most developers in high-tech companies work under significant pressure to deliver new functionality quickly. Our study corroborates the finding that developers experience increased deadline pressure in RRs.
**Quality aspects.** Tufano et al. [9] found that deadline pressure for an approaching release date is one of the main causes for code smell introduction. Industrial case studies of Codabux and Williams [42], and Torkar et al. [43], showed that a rapid development speed is perceived to increase technical debt. We found that the deadline pressure in RRs is perceived to result in a lack of focus and an increase in workarounds. Our study complements prior work by analyzing the impact of RRs on certain code quality metrics and different types of debt.
In the OSS context, multiple studies [2, 32, 33, 39] have shown that RRs ease the monitoring of quality and motivate developers to deliver quality software. Khomh et al. [3, 44] found that proportionally fewer bugs are fixed in RRs. Mäntylä et al. [2] showed that in RRs testing has a narrower scope that enables a deeper investigation of features and regressions with the highest risk. This was also found by our study and others [34, 39]. [34, 45] showed that testers in RRs lack time to perform time-intensive performance tests. In line with previous work, our respondents reported spending less time on creating regression tests. Our study complements the aforementioned studies by comparing the branch coverage of RRs to that of NRs.
Overall, studies that focus on RRs as main study target are explanatory and largely conducted in the context of OSS projects. In this paper, we present new knowledge by performing an exploratory case study of RRs in a large software-driven organization.
8 CONCLUSIONS
The goal of our paper is to deepen our understanding of the practices, effectiveness, and challenges surrounding rapid software releases in industry. To that end, we conducted an industrial case study, addressing timing and quality characteristics of rapid releases. Our contributions include the reusable setup of our study (Section 3) and the results of our study (Section 4).
The key findings of this study are: (1) Rapid teams are more often delayed than their non-rapid counterparts. However, rapid releases are correlated with a lower number of delay days per release. (2) Dependencies, especially in infrastructure, and testing are the top mentioned delay factors in rapid releases. (3) Rapid releases are perceived to make it easier to review code and to strengthen the developers’ focus on user-perceived quality. The code quality data shows that rapid releases are correlated with a higher test coverage, a lower average complexity and a lower number of coding standard violations. (4) Developers perceive rapid releases to negatively impact implementation choices and design due to deadline pressure and a short-term focus.
Based on our findings we identified challenging areas calling for further attention, related to the applicability of rapid releases, the role of security concerns, the opportunities for rapid feedback, and management of code quality and dependencies. Progress in these areas is crucial in order to better realize the benefits of rapid releases in large software-driven organizations.
REFERENCES
IEEE, 2016.
Semantic-based Construction of Content and Structure XML Index
Norah Saleh Alghamdi1,2 Wenny Rahayu3 Eric Pardede4
Department of Computer Science and Computer Engineering,
La Trobe University, Australia
Email: 1 nsalgamdi@students.latrobe.edu.au 2 n.alghamdi@tu.edu.sa 3 w.rahayu@latrobe.edu.au 4 e.pardede@latrobe.edu.au
Abstract
The Content And Structure (CAS) index for XML data is an important index type that has not been widely researched, even though its role is important, especially in multi-domain applications. Most existing research on XML query optimization focuses on structural indexes alone. Few works have utilized the rich semantics of XML data to support CAS indexing and querying. In this paper, we propose two indexes, namely a Structural index and a Content index, whose construction utilizes XML data semantics and schema. These indexes contribute to better CAS query performance. Experiments show that our method improves the performance of CAS queries by reducing CPU time and the total number of scanned elements compared to a standard method.
Keywords: Twig, Path, Index, Semantics, Objects, Value predicates, Content constraints, Structural predicates
1 Introduction
An example of a target-based query against the Purchase Order XML data in Fig. 2 is Q1 = “find all customers who sent an item by shipment to Melbourne, Victoria”. This query is also called a CAS query, since it requires a combination of content and structural constraints for its processing. A particular customer will be retrieved using the structural constraints of the query, and only a specific part of that customer’s information will finally be retrieved using the content constraints, also known as value predicates. Without a value predicate, the query would list all customer information and generate a report, which is outside the scope and target of this paper. Consequently, it is not practical to issue transactional queries without value predicates over large collections of data, where each collection can reach a million kilobytes or more. Therefore, an efficient method to index content or values is necessary to improve the performance of XML queries with value predicates.
To address queries with value predicates, a content constraint alone is not enough. Knowledge of the whole path query is also required, since a value predicate cannot stand alone without the enclosing path query.
Consider query Q1 above in XPath format: “/purchaseOrder/ShipTo[city='Melbourne' and state='VIC']/name/Fname”. Three significant characteristics are embedded within the query: (i) the underlying semantic structure between a set of interconnected nodes, (ii) the path connectivity between each of the nodes and (iii) the content constraints of the predicates. Therefore, it is important to take advantage of these query characteristics in optimizing query performance. Because XML data and schema structures are rich in semantics, identifying and leveraging the semantic connectivity of the data and schema when processing such queries is beneficial to query performance.
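To make Q1 concrete, the sketch below evaluates it against a toy Purchase Order fragment. The fragment is hypothetical (the paper's Fig. 2 is not reproduced here), and since Python's `xml.etree.ElementTree` supports only a limited XPath subset, the conjunctive predicate is written as two chained predicates instead of `and`.

```python
import xml.etree.ElementTree as ET

# Hypothetical Purchase Order fragment standing in for the paper's Fig. 2
doc = ET.fromstring("""
<purchaseOrder>
  <ShipTo>
    <city>Melbourne</city><state>VIC</state>
    <name><Fname>Alice</Fname></name>
  </ShipTo>
  <ShipTo>
    <city>Sydney</city><state>NSW</state>
    <name><Fname>Bob</Fname></name>
  </ShipTo>
</purchaseOrder>
""")

# Q1 with chained predicates replacing the 'and' of full XPath
names = [e.text for e in
         doc.findall("./ShipTo[city='Melbourne'][state='VIC']/name/Fname")]
print(names)  # ['Alice']
```

Note how both the structural constraint (the ShipTo/name/Fname path) and the content constraints (the city and state value predicates) must be satisfied, which is exactly what makes Q1 a CAS query.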
Naive query processing that scans the entire XML data in top-down or bottom-up fashion causes significant performance degradation in most cases (Li et al. 2001). Indexing schemes have been developed in recent years to overcome this issue. However, most indexing schemes support only certain kinds of query processing, such as simple path queries without branches (Goldman et al. 1997), whereas others support a limited set of queries, such as only structural queries without value predicates (Haw et al. 2009, Liang et al. 2006). To the best of our knowledge, very few works have considered CAS queries in their indexing schemes, and those that have do not utilize the semantics of XML data, because they identify XML nodes by global IDs that do not carry any semantic meaning (Li et al. 2001, Rizzolo et al. 2001, Zou et al. 2004, Monjurul et al. 2009, Chen et al. 2007). Therefore, exploiting the semantics of XML data to build an index scheme for CAS query processing is an ideal solution that has not yet been proposed in the literature. In this paper, we propose new indexes that exploit the semantics of XML data and schema in their construction for efficient CAS query processing.
The goal of this paper is to achieve the following contributions:

• **Pruning the search space**: based on the semantic knowledge gained from the XML schema.
• **Building a CAS index**: improving query performance by loading only the relevant portion of data during query processing, taking advantage of the semantic nature of XML data in the design of the value index.
• **Optimization**: producing the final results without the need to traverse the document, thereby saving I/O cost.
The organization of this paper is as follows. Section 2 discusses related work. Section 3 describes the preliminary knowledge of the XML schema, data and query model. Section 4 describes our proposed index. Section 5 presents the evaluation. Finally, Section 6 presents the conclusion and future work.
2 Related Work
Several indexing schemes have been proposed to improve the performance of XML path queries. However, despite these efforts, the focus has been on utilizing an index to process the structural part of twig queries effectively; these methods do not distinguish between structural and content search. Leaf nodes with values and internal nodes without values in XML data have different characteristics; thus, processing content in the same way as structure leads to expensive structural joins to search for content. In addition, the semantic information of XML data has been ignored in most previous studies, for both value and non-value nodes. As a result, portions of data that are semantically unrelated to the query are scanned.
Adjustable indices (Chung et al. 2002, Liang et al. 2006) group data nodes based on local similarity. However, since they keep track of the forward and backward paths, their size tends to be huge. Haw and Lee (Haw et al. 2009) label each node of an XML document and then join them. However, their joining process becomes aggressive, especially when the query has only ancestor-descendant relationships. ViST (Wang et al. 2003) and LCS-Trim (Tatikonda et al. 2007) transform both XML data and queries into sequences and then evaluate the queries based on sequence matching. The drawback of these methods is the cost incurred by sequence matching.
XISS (Li et al. 2001) indexes XML data based on a pre-order numbering scheme to quickly determine ancestor-descendant relationships between elements. It consists of five components, namely the element, attribute, structure and name indices and the value table. XISS collects all distinct name strings in the name index, implemented as a B+-tree with “nid” as a name identifier; “nid” is used as the key for the element index. The drawback of XISS is its node-joining process, which can produce large intermediate results and in some cases degrade query performance.
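The idea behind such pre-order numbering schemes can be sketched as follows: a DFS assigns each node a (start, end) interval, and u is an ancestor of v exactly when u's interval encloses v's, so the relationship is decided by two comparisons with no traversal. The tree below is a hypothetical fragment, not taken from the paper.

```python
from itertools import count

def assign_intervals(node, labels, counter):
    """DFS labeling: each node gets (start, end) bracketing its subtree."""
    start = next(counter)
    for child in node.get("children", []):
        assign_intervals(child, labels, counter)
    labels[node["tag"]] = (start, next(counter))
    return labels

tree = {"tag": "purchaseOrder", "children": [
    {"tag": "ShipTo", "children": [{"tag": "city"}, {"tag": "state"}]},
]}
labels = assign_intervals(tree, {}, count(1))

def is_ancestor(u, v):
    # u is an ancestor of v iff u's interval strictly encloses v's
    (su, eu), (sv, ev) = labels[u], labels[v]
    return su < sv and ev < eu

print(is_ancestor("purchaseOrder", "city"))  # True
print(is_ancestor("city", "state"))          # False
```

This interval test is what lets a scheme like XISS answer ancestor-descendant predicates without walking the document, though, as noted above, joining the resulting node lists can still produce large intermediate results.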
ToXin (Rizzolo et al. 2001) also collects the values of XML data in a value index, besides summarizing all forward and backward paths of the XML graph in a path index. The value index consists of values and their corresponding paths. The path index consists of an index tree corresponding to a DataGuide (Goldman et al. 1997) and an instance function for each edge of the tree index. ToXin navigates down, navigates up and filters an XML query to produce a set of nodes that match the query nodes and relationships over value predicates. An obvious shortcoming is the lack of support for range predicates, since all values are treated as strings. Since this approach uses a DataGuide and an edge-based approach to joining paths, it does not keep the hierarchy information needed to answer complex twig queries.
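A ToXin-style value index can be sketched as a map from each leaf value to the root-to-leaf paths at which it occurs, so that a value predicate is resolved by an index lookup plus a path filter rather than a document scan. The document below is a hypothetical fragment, and all values are kept as strings, which illustrates the range-predicate shortcoming noted above.

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

doc = ET.fromstring(
    "<purchaseOrder><ShipTo><city>Melbourne</city><state>VIC</state></ShipTo>"
    "<BillTo><city>Melbourne</city></BillTo></purchaseOrder>")

value_index = defaultdict(set)  # value -> set of root-to-leaf paths

def build(node, path):
    path = path + "/" + node.tag
    if node.text and node.text.strip():
        value_index[node.text.strip()].add(path)
    for child in node:
        build(child, path)

build(doc, "")
print(sorted(value_index["Melbourne"]))
# the predicate city='Melbourne' now needs only a lookup plus a path filter
```

Because the keys are plain strings, a query such as `quantity > 10` cannot be answered by this structure, matching the shortcoming of ToXin described above.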
Unlike ToXin, CTree (Zou et al. 2004) provides path summaries not only at the XML document level but also at the group level, and additionally provides details of child-parent links at the element level. In addition, CTree has multiple value indices, one per data type of the XML data (List, Number, DTime). All the value indices support a search (value, gid, put parameter) operation, where gid indicates a certain group of the CTree. By determining gid, irrelevant groups are eliminated when evaluating value predicates; therefore, I/O cost is low. CTree handles XML documents with only regular groups efficiently. However, for an XML document containing many irregular groups, the index space rises rapidly, due to the need for element-level links for each element.
The RootPath and DataPath indices (Chen et al. 2007) evaluate XML twig queries with value predicates while being tightly integrated with a relational database query processor. The RootPath index indexes the prefixes of root-to-leaf paths: each entry is a concatenation of a leaf value and the reverse of its schema path, and it returns the complete node ID list. In contrast, the DataPath index stores all sub-paths of root-to-leaf paths. Consequently, the DataPath index is bigger than RootPath, due to the duplication of schema paths and node IDs in its structure, and the index size grows with the size of the XML documents. To overcome this shortcoming, Chen et al. (2007) explored lossless and lossy compression techniques for reducing the index sizes.
To evaluate twig queries, TwigTable stores values in semantics-based relational tables, whereas the internal structure of the XML document is stored in inverted lists (Ling et al. 2011). In this approach, a structural join algorithm maintains the inverted lists while a relational database processor maintains the tables. The semantics-based design of the tables brings TwigTable its performance advantages. On the other hand, the limitation of this approach is that when a query has no value predicates, no semantics are applied and merely a structural join algorithm is performed.
3 Preliminary Knowledge
Schema and Data model: Both the XML schema and the data are modelled as a large, ordered, node-labelled tree (T,N,E), where each node n ∈ N corresponds to an XML element and each edge between nodes (n_i,n_j) ∈ E identifies the containment of node n_i under n_j in T. In addition, each leaf node ln_i of the XML data contains a value denoted by value(ln_i).
Fig. 1 shows a purchase order schema, which describes a purchase order generated by a home products ordering and billing application (W3C 2004). Fig. 2 shows the XML data tree corresponding to the schema in Fig. 1. Each has elements connected by edges, but the data additionally has a value in each leaf node.
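As a minimal illustration of this data model (a sketch; the node names cover only a fragment of the purchase-order example, and the class layout is an assumption, not the paper's implementation), an XML data tree can be represented as labelled nodes with ordered children, where only leaves carry values:

```python
class XNode:
    """A node of an ordered, node-labelled XML data tree."""
    def __init__(self, tag, value=None):
        self.tag = tag          # element name (node label)
        self.value = value      # only leaf nodes carry a value
        self.children = []      # ordered containment edges

    def add(self, child):
        self.children.append(child)
        return child

# A tiny fragment resembling the purchase-order example of Fig. 2.
root = XNode("purchaseOrder")
ship = root.add(XNode("shipTo"))
ship.add(XNode("street", "Waterdale road"))
ship.add(XNode("postcode", "3081"))
ship.add(XNode("state", "VIC"))

# Leaf nodes have values; internal nodes do not.
leaves = [c for c in ship.children if not c.children]
assert all(c.value is not None for c in leaves)
assert root.value is None
```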
Query Model: The focus of our work is on CAS XML queries, either simple paths or paths with branches. An XML query \(Q(N_Q, E_Q, P_Q)\) consists of nodes, labelled edges, and a query predicate, where each node \(q \in N_Q\) represents a query tag that adheres to a set of an XML document's elements. A labelled edge between two nodes \((q_i, q_j) \in E_Q\) indicates a structural constraint, involving the operators "/" and "//" denoting the "PC" parent-child relationship and the "AD" ancestor-descendant relationship respectively. A query predicate is held between brackets "[ ]" in the query \(Q\) and may include further structural constraints and a filter of content constraints. The filter of content constraints evaluates to true based on the corresponding XML document nodes. A list of \(N\)-ary tuples is generated to produce the final result of matching XML document nodes.
**Query Predicate:** A query predicate \(P_Q\) is a combination of all or some of \(\{N'_Q, E'_Q, V_{P_Q}\}\), where each \(q_i \in N'_Q\) is a query tag within the predicate brackets, each \(e_i \in E'_Q\) is an edge between two query tags, and each \(v_i \in V_{P_Q}\) is a value that can match the value of a leaf node \(l_i\) in the data model; \(P_Q\) may itself contain another predicate representing a branching point.
Consider a CAS query \(Q2 = //shipTo[state='VIC']/name\), where \(shipTo\), \(state\), \(name\), "/" and "//" are structural constraints and 'VIC' is a content constraint. "[state='VIC']" in \(Q2\) is called a predicate; here it is a content constraint, but in general a predicate can be a combination of content and structural constraints.
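The query model above can be sketched in code as follows (a hypothetical representation; the class and field names are assumptions chosen for illustration, not the paper's data structures):

```python
# A CAS query node: tag, axis label on the edge to its parent
# ("/" for PC, "//" for AD), and an optional content constraint.
class QNode:
    def __init__(self, tag, axis="/", v_pred=None):
        self.tag = tag        # query tag matching element names
        self.axis = axis      # edge label: "/" (PC) or "//" (AD)
        self.v_pred = v_pred  # content constraint on a leaf value, if any
        self.children = []    # branches, including predicate branches

# Q2 = //shipTo[state='VIC']/name as a small query tree.
q_ship = QNode("shipTo", axis="//")
q_state = QNode("state", axis="/", v_pred="VIC")   # the predicate branch
q_name = QNode("name", axis="/")                   # the output branch
q_ship.children = [q_state, q_name]

assert q_ship.children[0].v_pred == "VIC"
```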
**4 Object-based Content and Structure XML Indexing**
This section pays attention to utilizing the semantics of the structure and the content of XML data and schema during the index construction phase. The Structural index is introduced first to maintain the structural constraints of XML queries. Thereafter, we present the Content index, which is proposed to improve the performance of querying constant values within XML data. Our methodology exploits the semantic nature of XML data to improve query performance. In order to achieve this goal, we adopt an object-based XML data partitioning technique, called OXDP (Alghamdi et al. 2011), as a pre-processing phase ahead of constructing the indices.
In particular, OXDP is a set of semantics-rich rules that can discover useful semantic information and identify objects within an XML schema. In this paper, we utilize such rules to determine an XML document's objects and then partition the data based on the discovered objects (refer to (Alghamdi et al. 2011) for more details). Indeed, the term semantics is used as everyday terminology by researchers across different concepts and application fields. To be precise, XML semantics is envisioned in our work as an XML feature that enables us to identify XML data based on the meaning of their tags as well as the relationships between the tags. Such identification facilitates grouping and partitioning relevant data in order to provide semantically structured data.
**Definition 1. (Object)** An object of an XML document is defined as a complex element type of XML schema associated with that document. In other words, an object is a non-leaf element that consists of simple or other complex elements.
**Definition 2. (Object-based Partition (Opart))** An Object-based Partition is a partition of XML schema and XML data that consists of a single object or multiple or nested objects.
In Fig. 1 & Fig. 2, shipTo and its descendants are considered Opart1 in our example and billTo and its descendants are considered Opart2.
Afterwards, tokenising all distinct elements of the XML schema as well as all distinct values inside the XML data is the first step in constructing our indices. In both schema and data, attributes and their associated values are treated as simple elements with values.
**Definition 3. (Element Token (eT))** An Element token is an identifier that encodes each distinct element’s tag name of XML schema.
**Definition 4. (Value Token (vT))** A Value token is an identifier that indicates each distinct leaf element’s value of XML data.
In Fig. 1, each element of the schema is tokenised. For instance, the schema elements "purchaseOrder", "shipTo", "state" and "postcode" have "eT0", "eT1", "eT8" and "eT6" as their tokens respectively. Element Tokens represent all elements in the schema and the data, whereas Value Tokens are created only from data with values. In Fig. 2, the values "Waterdale road", "3081" and "VIC" are tokenised to "vT3", "vT4" and "vT5" respectively. Both eT and vT are implemented as integers to eliminate the computational overhead caused by string comparisons, but we use the symbolic eT and vT for a clear demonstration. The following subsections present the Structural indexing and the Content indexing.
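Tokenisation of this kind can be sketched as follows (an illustrative assumption about the assignment order; the element list mirrors the running example, with "Fname" and "Lname" as children of "name"):

```python
# Map each distinct string to a fresh integer token; repeated
# occurrences reuse the existing token. The "eT"/"vT" prefixes in the
# paper are symbolic; internally only integers are compared.
def make_tokeniser():
    table = {}
    def token(key):
        return table.setdefault(key, len(table))
    return token, table

e_token, e_table = make_tokeniser()
for tag in ["purchaseOrder", "shipTo", "name", "Fname", "Lname",
            "street", "postcode", "city", "state"]:
    e_token(tag)

v_token, v_table = make_tokeniser()
for value in ["Waterdale road", "3081", "VIC"]:
    v_token(value)

assert e_token("shipTo") == 1   # a repeated tag reuses its token
```

With this insertion order, "street" receives token 5 and "state" token 8, matching the symbolic eT5 and eT8 used in the running example.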
### 4.1 Structural Indexing
Structural indices, comprising the Schema index and the Data index, are proposed mainly to cope with the structural part of XML queries in an efficient way. These indices can evaluate arbitrary query structures including "/" or "//" as well as branching queries. The components of each index are presented as follows:
**Definition 5. (Schema index)** The Schema index consists of the element tokens of the schema, each associated with a set of pairs of a Path of Schema Object and the ID of an Object-based partition. The Schema index can be represented as "(eT, (p, Opart))" where "eT" is the element token, "p" is the Path of Schema Object and "Opart" is the ID of the partition where the element exists. eT, p and Opart are given in Definitions 3, 8 and 2 respectively.
**Definition 6. (Data index)** The Data index consists of two indices. In the first index (Data index1), each Path of Schema Object is associated with all its corresponding Paths of Data Object; it can be represented as a pair "(p, dp)" following Definitions 8 and 10. The second index (Data index2) consists of all Data Objects (Definition 9) grouped by object partition.
**Definition 7. (Schema Object (So))** A Schema Object is a set of tokens including the element token of a parent element tag, which is a complex element of the XML schema, alongside the element tokens of its children tags. Consider $S_o = \{eT_{parent}, eT_{child(1)}, \ldots, eT_{child(k)}\}$, where $eT_{parent}$ is the element token of the parent node, $eT_{child(i)}$ is the element token of an associated child within the schema and $k$ is the number of the parent's children.
**Definition 8. (Path of Schema Object (p))** Path of Schema Object is a set of $S_o$ located on the same path from the root to a leaf node of the XML schema.
In Fig. 1, Schema Objects and Paths of Schema Object have been added to the schema diagram. For instance, $S_o2$ is a Schema Object whose tokens are "eT1 eT2 eT5 eT6 eT7 eT8", corresponding to the elements "shipTo", "name", "street", "postcode", "city" and "state" respectively. It is important to highlight that a Schema Object represents an object as a parent with its direct children tags, not its descendants. For instance, "name" has "Fname" and "Lname" as its children, which are not included in $S_o2$. In the same figure, the Path of Schema Object "p1" is the set of Schema Objects $S_o1$, $S_o2$, $S_o3$ located on the same path from the root to a leaf.
**Definition 9. (Data Object (Do))** A Data Object is a pair of a set of element tokens and the set of those elements' positions inside the XML data. Consider $D_o = (\{eT_{parent}, eT_{child(1)}, \ldots, eT_{child(k)}\}, \{Pos_{parent}, Pos_{child(1)}, \ldots, Pos_{child(k)}\})$, where $eT_{parent}$ is the element token of the parent node, $eT_{child(i)}$ is the element token of an associated child and $k$ is the number of the parent's children within the XML data. The positions of XML data are generated during a depth-first traversal of the tree by sequentially assigning a number at each visit.
**Definition 10. (Path of Data Object (dp))** A Path of Data Objects is a set of $D_o$ located on the same path from the root to a leaf node of the XML data.
In Fig. 2, Data Objects and Paths of Data Object have been added to the XML data tree. For instance, $D_o2$ is a Data Object whose tokens are "eT1 eT2 eT5 eT6 eT7 eT8" with positions "1 2 5 6 7 8". In the same figure, "p1", consisting of "$S_o1$, $S_o2$, $S_o3$" in the Schema index, corresponds to two Paths of Data Object, "$dp1$" and "$dp3$", containing "$D_o1$ $D_o2$ $D_o3$" and "$D_o1$ $D_o4$ $D_o5$" as their sets of Data Objects respectively.
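The pre-order position numbering and the construction of Data Objects per Definition 9 can be sketched as follows (a simplified assumption: the tree keeps "name" as a leaf, so the concrete positions differ from Fig. 2, which also contains "Fname"/"Lname"):

```python
# Tree as (tag, [children]); leaf nodes have an empty child list. The
# token numbers follow the running example.
tree = ("purchaseOrder",
        [("shipTo", [("name", []), ("street", []), ("postcode", []),
                     ("city", []), ("state", [])])])

e_token = {"purchaseOrder": 0, "shipTo": 1, "name": 2, "street": 5,
           "postcode": 6, "city": 7, "state": 8}

data_objects = []  # collected (token list, position list) pairs

def visit(node, counter):
    """Depth-first traversal, sequentially assigning a number per visit."""
    tag, children = node
    pos = counter[0]
    counter[0] += 1
    child_info = [visit(c, counter) for c in children]
    if children:  # each non-leaf element yields one Data Object
        tokens = [e_token[tag]] + [t for t, _ in child_info]
        positions = [pos] + [p for _, p in child_info]
        data_objects.append((tokens, positions))
    return e_token[tag], pos

visit(tree, [0])
# The shipTo object pairs its children's tokens with their positions.
assert data_objects[0] == ([1, 2, 5, 6, 7, 8], [1, 2, 3, 4, 5, 6])
```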
### 4.2 Content Indexing
To index the content of XML data, the Value index is proposed. First, all distinct leaf values of the XML data are tokenised as in Definition 4, and the positions of the XML nodes are generated during a depth-first traversal of the document. In Fig. 1 and Fig. 2, "dp1" and "dp3" are the Paths of Data Object, and the values "Waterdale road", "3081" and "VIC" are tokenised to "vT3", "vT4" and "vT5" respectively.
Proceedings of the Twenty-Fourth Australasian Database Conference (ADC 2013), Adelaide, Australia
Figure 3: Value Index
The redundancy of the data within an XML document increases the index size in most of the previous works. In our index, this issue has been taken into consideration as shown by Remark 1.
Remark 1. There is a single value token for all matched values located in the same object partition, same Path of Schema Object and having the same element tokens.
From Remark 1, memory is saved and hence the searching time is reduced as well. This remark is especially practical for data that often has redundancy. For instance, on a small scale, suppose the purchaseOrder XML data has 10 shippers living at the same shipTo address; it is more precise to ignore the redundant address data and record the address only once. On a large scale, such redundancy would negatively affect the performance of a query processed over the index. It is important to highlight that the Data Objects associated with each value token record the position of that value's parent node within the data, as in Definition 9. This feature improves the efficiency of query processing by trimming the search space in the Data index later.
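Remark 1 amounts to keying value tokens on the combination of partition, path and element token, which can be sketched as follows (the key layout and sample entries are illustrative assumptions):

```python
# One value token per (object partition, Path of Schema Object,
# element token, value): matched values in the same context share it.
value_tokens = {}

def value_token(opart, p, e_tok, value):
    key = (opart, p, e_tok, value)
    return value_tokens.setdefault(key, len(value_tokens))

# Ten shippers with the same shipTo street yield one token, not ten.
tokens = {value_token("Opart1", "p2", 5, "Waterdale road")
          for _ in range(10)}
assert len(tokens) == 1

# A different value in the same context gets its own token.
assert value_token("Opart1", "p2", 6, "3081") != value_token(
    "Opart1", "p2", 5, "Waterdale road")
```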
4.3 Discussion
With both the Schema index and the Data index, both single-path queries and branching queries can be answered. For instance, consider Q3 = "purchaseOrder/shipTo[street][state]": the element tokens of the query are "eT0", "eT1", "eT5" and "eT8". In the Schema index, "eT0" is associated with (p1, Opart1), (p2, Opart1), (p3, Opart2), (p4, Opart2), (p5, Opart3); "eT1" is associated with (p1, Opart1), (p2, Opart1); and "eT5" and "eT8" are associated with (p2, Opart1). To trim the search space, we intersect, based on Opart, the entries of "eT0" and "eT1", and then the intersected result with each child of "eT1" separately, i.e. "eT5" and "eT8". The final result is (p2, Opart1). From the Data index, "dp2" and "dp4" are retrieved based on the "p2" of the Schema index. Since "dp2" and "dp4" consist of sets of Data Objects, the positions of the query elements are retrieved from the Data Objects located in Opart1 by matching the element tokens of the query with the element tokens of the Data Objects. The final results of the query would be "0, 1, 5, 8" and "0, 9, 13, 16".
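The intersection step described above can be sketched over the Schema index entries of the example (here intersecting whole (p, Opart) pairs; the entries repeat the associations just listed and are otherwise assumptions about Fig. 1):

```python
# Schema index: element token -> set of (Path of Schema Object, Opart).
schema_index = {
    "eT0": {("p1", "Opart1"), ("p2", "Opart1"), ("p3", "Opart2"),
            ("p4", "Opart2"), ("p5", "Opart3")},
    "eT1": {("p1", "Opart1"), ("p2", "Opart1")},
    "eT5": {("p2", "Opart1")},
    "eT8": {("p2", "Opart1")},
}

# Q3 = purchaseOrder/shipTo[street][state]: intersect along the path,
# then the intersected result with each branch child separately.
common = schema_index["eT0"] & schema_index["eT1"]
result = (common & schema_index["eT5"]) & (common & schema_index["eT8"])
assert result == {("p2", "Opart1")}
```

Only the surviving pair directs the subsequent Data index lookup, so partitions Opart2 and Opart3 are never touched.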
The Value index in conjunction with the Structural indices can be used to evaluate arbitrary queries with different predicates, such as a value predicate, a single path ending with a value predicate, or a branched path ending with a value predicate. The functionality of the Value index is discussed using the part "shipTo[street='Waterdale Rd']" of the query Q3. The Value index groups the values of an XML document within objects; by applying this semantics-based technique, the search is trimmed semantically based on the objects. In the given query, the Schema index entry of "street" is eT5 associated with (p2, Opart1). This means that only Opart1 is required. In addition, the index adds the Paths of Schema Object as identifiers that assist in decreasing the search space.
Our index stores all the value tokens with their corresponding Data Objects according to their data context.
The Value index is built from the Schema index, the Data index and the XML data. It keeps the semantic connectivity between the nodes of XML data to deliver efficient performance for queries with value predicates. It consists of all the object partition identifiers, each associated with a set of Paths of Schema Object; for each path, only the element tokens of the leaf nodes are kept, and the value tokens of each element token are associated with their Data Objects.
Definition 11. (Value index) Consider a Value index \( VI = \{Opart_1, \ldots, Opart_k\} \), where \( Opart_i \) is an object partition identifier and \( k \) is the number of object partitions. Each object partition identifier of \( VI \) is associated with \( \{Leaf(p_1), \ldots, Leaf(p_m)\} \), where each \( Leaf(p_j) \) is a Path of Schema Object existing in \( Opart_i \) and \( m \) is the number of Paths of Schema Object. Let \( Leaf(p_j) = \{eT_1, \ldots, eT_n\} \) be the set of tokens of the leaf elements of that path in the XML data, with \( n \) the total number of element tokens. For each \( eT_i \), there is a set of value tokens associated with their corresponding Data Objects, \( eT_i = \langle vT_1, \{D_{o1}, \ldots, D_{oz}\} \rangle, \ldots, \langle vT_y, \{D_{o1}, \ldots, D_{oz}\} \rangle \), where \( y \) is the number of value tokens and \( z \) is the number of Data Objects per value token.
Figure 3 depicts the Value index for "Opart1" only; the rest of the partitions have a similar representation. "Opart1" has "p1" and "p2" as its Paths of Schema Object, which contain the tokens of the leaf elements "eT3, eT4" and "eT5, eT6, eT7, eT8" respectively. It can be seen that "eT2" in "p2" is not considered since it is the element token of a non-leaf node. The last level of this index is the association of the value tokens with their Data Objects, e.g. \( \langle vT1, D_{o3} \rangle \) and \( \langle vT7, D_{o5} \rangle \) in the same figure.
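Following Definition 11, the Value index is essentially a four-level mapping, which can be sketched with nested dictionaries (the concrete entries here are illustrative assumptions based on the running example, not a transcription of Figure 3):

```python
# Value index: Opart -> Path of Schema Object -> leaf element token
#              -> value token -> list of Data Object identifiers.
value_index = {
    "Opart1": {
        "p2": {
            "eT5": {"vT3": ["Do2"]},   # street = 'Waterdale road'
            "eT8": {"vT5": ["Do2"]},   # state  = 'VIC'
        },
    },
}

def lookup(opart, p, e_tok, v_tok):
    """Return the Data Objects holding a given value, or [] if none."""
    return (value_index.get(opart, {})
                       .get(p, {})
                       .get(e_tok, {})
                       .get(v_tok, []))

assert lookup("Opart1", "p2", "eT5", "vT3") == ["Do2"]
assert lookup("Opart1", "p2", "eT5", "vT9") == []
```

A lookup descends partition, path and element token before touching any value token, so irrelevant partitions and paths are skipped without string comparison.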
In our example, the search space is trimmed to the values that have "p2" within the partition "Opart1". Another advantage is that instead of performing an exhaustive string search looking for values matching the query condition, the element tokens eliminate irrelevant values. Thus, the condition 'Waterdale Rd' is mapped to its value token, and then the token is looked up under eT5 without the need to scan all the value tokens within the path "p2". The output of this index is the value token, "vT3" in this query, leading us to the right Data Object, which is $D_o2$. The interest of finding the Data Objects, i.e. $D_o2$, is that only the related Data Objects will be visited in the Data index to produce the final results.
The design of the proposed indices has three features to facilitate the evaluation of twig queries in optimal execution time. The indices are able to: (i) preserve the details of parent-child elements through the objects, (ii) preserve the details of all objects located on each path of the schema and data, as in the Path of Schema Object and Path of Data Object, and (iii) partition and keep links between interconnected data based on object-based semantics, as stated in Definition 2.
4.4 Algorithms
Our algorithm “ProcessQuery” is a recursive function that decomposes a branched path into multiple single paths. It applies an intersection, based on the objects, over the Schema index entries of all query nodes “qNode” located on the same query path. This process ends with an intersected Schema index “i_SI” among all the query nodes within the path, allowing us to use the Path of Schema Object “p” and the object partition “Opart” on which the search will be done.
Then, it evaluates a path ending with a value predicate using the function “EvaluateContent”, while a path without a value predicate is evaluated by “EvaluateStructure”. Both EvaluateContent and EvaluateStructure find the XML node positions within the data. They are independent of each other, i.e. a whole path can be processed completely by only one of them without the need for the other. The only difference between them is that EvaluateContent utilizes the Value index to find the candidate Data Objects that hold the condition of the value predicate before proceeding with the structural search. Thus, EvaluateStructure performs only the structural search, whereas EvaluateContent performs both structural and content search. The main advantage of EvaluateContent is its capability to trim the search space of scanned elements; its details are shown later. After evaluating the content and structure of the query and obtaining the XML node positions, the results are merged through MergeResult, which keeps the structural order of the nodes and produces the final results. The function EvaluateContent has two main functionalities. The first is the content search, which starts from line 1 and then embeds the structural search from line 8. The content search uses the Value index to retrieve only the participating Data Objects by filtering on the value predicate using the information coming from the Schema index, namely “p” and “Opart”. After that, the structural search is done on those XML nodes that exist within the participating Data Objects. At line 8, it goes through the related portion of the first Data index that matches “p” to check the last Data Object of the input query path.
```
Input: qNode, c_SI "current Schema index", path, depth
Output: Query node positions within the data
1  if ¬qNode.Children then
2      path.Add(qNode);
3      i_SI ← Intersect(c_SI, SchemaIndex[qNode]);
4      foreach p of i_SI do
5          if qNode.ValuePredicate then
6              result ← EvaluateContent(p, path, Opart);
7          else
8              result ← EvaluateStructure(p, path, Opart);
9          end
10     end
11     return result;
12 end
13 i_SI ← Intersect(c_SI, SchemaIndex[qNode]);
14 firstOccurrence ← true;
15 path.Add(qNode);
16 foreach c in qNode.Children do
17     temp ← ProcessQuery(c, i_SI, path, depth+1);
18     if firstOccurrence then
19         result ← temp; firstOccurrence ← false;
20     else
21         result ← MergeResult(result, temp, depth);
22     end
23 end
24 return result;
```
Algorithm 1: ProcessQuery
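A compact, executable rendering of ProcessQuery's recursion is sketched below. This is a simplification under stated assumptions: the query is a nested dict, the Schema index maps tags to sets of paths, and the leaf-level EvaluateContent/EvaluateStructure calls and MergeResult are elided in favour of returning the decomposed (path, p, predicate) plans:

```python
# Decompose a branched query into root-to-leaf paths, intersecting the
# Schema index entries along the way (a stand-in for Algorithm 1; the
# per-leaf Evaluate* calls are represented by the returned plan tuples).
def process_query(qnode, c_si, schema_index, path=()):
    path = path + (qnode["tag"],)
    i_si = c_si & schema_index[qnode["tag"]]   # Intersect(c_SI, SI[qNode])
    if not qnode.get("children"):              # leaf: one full single path
        return [(path, p, qnode.get("vpred")) for p in sorted(i_si)]
    result = []
    for child in qnode["children"]:            # branch: recurse per child
        result.extend(process_query(child, i_si, schema_index, path))
    return result

schema_index = {"purchaseOrder": {"p1", "p2"}, "shipTo": {"p1", "p2"},
                "street": {"p2"}, "state": {"p2"}}
q = {"tag": "purchaseOrder", "children": [
        {"tag": "shipTo", "children": [
            {"tag": "street"}, {"tag": "state", "vpred": "VIC"}]}]}

plans = process_query(q, {"p1", "p2"}, schema_index)
assert plans == [(("purchaseOrder", "shipTo", "street"), "p2", None),
                 (("purchaseOrder", "shipTo", "state"), "p2", "VIC")]
```

Each returned tuple corresponds to one single path: the first would be handled by EvaluateStructure, the second (carrying a value predicate) by EvaluateContent.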
5 Experiments
5.1 Environment set up
In order to study the improvement in CAS query processing achieved by our proposed indices, a series of experiments was carried out. Our Value index is compared with the standard value index, with the focus on value predicates. The structural part of the query is processed identically in both methods: we use our Structural index, including the Schema index and the Data index, while the content part is processed by our Value index and by the standard value index respectively.
A prototype system was implemented using C#.NET. All XML indices in this paper were loaded into RAM before running the queries, so no I/O cost for reading the index data is incurred. All the experiments were conducted on an Intel Core 3.2 GHz PC with 6 GB RAM running Windows 7.
Algorithm 2: EvaluateContent
Input: p, quP "query path", Opart
Output: XML node positions
```
1  foreach viObject in ValueIndex[Opart] do
2      viPath ← viObject[p];
3      viToken ← viPath[quP[quP.Count-1].Token];
4      if viToken = null then continue;
5      q_v ← quP[quP.Count-1].vPred;
6      foreach d in viToken[q_v] do
7          // structural search over the candidate Data Objects
8          PathsOfDo ← DataIndex1[p];
9          foreach dp in PathsOfDo do
10             if dp[dp.Count-1] ≠ d then continue;
11             R_t ← new List(); k ← 0;
12             foreach D_o in dp do
13                 (find matched element tokens; lines 20 to 46)
14             end
15             if R_t.Count = quP.Count then Result.Add(R_t);
16         end
17     end
18 end
19
20 // element-token matching within the Data Objects of the partition
21 object ← DataIndex2[viObject];
22 foreach d_o in object[D_o] do
23     foreach e_T in d_o.Tokens while k < quP.Count do
24         R_t.Add(d_o.Pos[e_T]);
25         if k > 0 then
26             if quP[k].Tag[0] ≠ '/' then        // PC edge: check depths
27                 y_c ← nDepth[R_t[R_t.Count-1]];
28                 y_p ← nDepth[R_t[R_t.Count-2]];
29                 if y_c ≠ y_p + 1 then
30                     R_t.Remove(R_t.Count-1);
31                     continue;
32                 end
33             end
34         end
35         k ← k + 1;
36         if k = quP.Count ∧ quP[k-1].leaf then
37             if R_t.Count ≠ quP.Count then
38                 R_t.RemoveAt(R_t.Count-1);
39                 k ← k - 1;
40             end
41         end
42         if k = quP.Count ∨ quP[k].leaf then
43             break;
44         end
45     end
46 end
```
5.2 Standard Value Indexing
In most past research, the standard indexing method for values, when value predicates exist in CAS queries, is to index each value with its node position id. When performing joins, a small number of node ids is returned for further joins. We compare our proposed Value index with this standard value index.
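The baseline can be sketched as a flat value-to-node-id mapping (the entries are illustrative; node ids are assumed pre-order positions), which makes the contrast with the object-based Value index concrete:

```python
# Standard value index: each value occurrence maps to the ids of the
# nodes holding it, with no object, path or partition grouping.
standard_value_index = {}

def add_value(value, node_id):
    standard_value_index.setdefault(value, []).append(node_id)

for node_id, value in [(3, "VIC"), (11, "VIC"), (4, "3081")]:
    add_value(value, node_id)

# A predicate like state='VIC' returns every occurrence of the value,
# relevant or not; all of them must enter the structural joins.
assert standard_value_index["VIC"] == [3, 11]
```

Because no semantic context is attached to the ids, every returned occurrence must be joined structurally, whereas the object-based Value index discards semantically unrelated occurrences before any join.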
5.3 Experiment Datasets
DBLP, Auction Data, and SigmodRecords were used in our experiments; they were obtained from the University of Washington XML Repository (2002). The characteristics of each dataset are shown in Table 1.
5.4 Experiment Metrics
To evaluate the performance of our proposed algorithms, two metrics were used. The first metric is CPU cost, obtained by calculating the average execution time of a query. Secondly, the total number of scanned elements is measured during the joining process. This metric gives a good indication of the ability of our algorithms to trim the search space and to skip portions of the data.
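These two metrics can be instrumented as sketched below; `run_query` is a hypothetical stand-in for the actual evaluator, with the scanned-element counter incremented at every element visit:

```python
import time

scanned = [0]  # total elements touched during joining

def run_query():
    """Stand-in evaluator: pretend each iteration scans one element."""
    total = 0
    for _ in range(1000):
        scanned[0] += 1
        total += 1
    return total

def average_ms(fn, repeats=5):
    """Average wall-clock execution time of fn over several runs."""
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    return (time.perf_counter() - start) * 1000 / repeats

elapsed = average_ms(run_query)
assert scanned[0] == 5000 and elapsed >= 0
```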
5.5 Experiment Criteria
Since the focus of this paper is the performance of CAS queries, the evaluation considers the use of both the structural and the content index on the queries. To examine the structural part, we vary the type of relationships, Parent-Child (PC) or a hybrid of PC and Ancestor-Descendant (AD), and the number of branches. To evaluate the content part, we consider value predicates comprising numeric or string values. In addition to these criteria, we include simple paths and branched paths with values in the predicates.
5.6 Experiment Queries
Table 2 presents the evaluation queries. Each query is coded "QXN", where 'X' represents 'S' (SigmodRecords), 'D' (DBLP), or 'A' (Auction Data), and 'N' is the query number within the respective dataset. The queries were selected to cover most combinations of the evaluation criteria, so that the sensitivity of query performance to each criterion can be observed. QD9 and QD11 are simple path queries whereas the rest are branched queries. QD6 and QS5 contain only PC relationships, while the rest contain hybrid edges of PC and AD. The branched queries have a variety of branch numbers: for example, QS1 and QS2 have two branches whilst QD3, QD4, and QD5 have 3, 4 and 5 branches respectively. The type of value predicate also differs among the queries: QA9 and QS5 have path-value predicates, QA10, QS4, and QS5 have path-branch-value predicates, and the rest have plain value predicates.
Table 2: Experiment Queries
<table>
<thead>
<tr>
<th>QXN</th>
<th>Query pattern</th>
</tr>
</thead>
<tbody>
<tr>
<td>QA1</td>
<td>//auction_info/current_bid=" $620.00"/time_left</td>
</tr>
<tr>
<td>QA2</td>
<td>/root/auction_info/location=" LOS ANGELES, CA"/high_bidder/bidder_name</td>
</tr>
<tr>
<td>QA3</td>
<td>/root/listing/location=" LOS ANGELES, CA"/auction_info/time_left</td>
</tr>
<tr>
<td>QA4</td>
<td>//auction_info/current_bid=" $620.00"/num_items="1"/time_left</td>
</tr>
<tr>
<td>QA5</td>
<td>//auction_info/current_bid=" $610.00"/num_items="1"/started_at=" $100.00"/time_left</td>
</tr>
<tr>
<td>QA6</td>
<td>//auction_info/current_bid=" $610.00"/num_items="1"/started_at=" $100.00"</td>
</tr>
<tr>
<td>QA7</td>
<td>//auction_info/current_bid=" $610.00"/num_items="1"/location="LOS ANGELES, CA"/high_bidder/bidder_name</td>
</tr>
<tr>
<td>QA8</td>
<td>//listing/seller_info/seller_name=" cubsfantony"/auction_info/current_bid</td>
</tr>
<tr>
<td>QA9</td>
<td>//listing/auction_info/current_bid=" $620.00"/num_items="1"/time_left</td>
</tr>
<tr>
<td>QS1</td>
<td>/issue/volume="11"/number</td>
</tr>
<tr>
<td>QS2</td>
<td>/article/title="Architecture of Future Data Base Systems."/authors</td>
</tr>
<tr>
<td>QS3</td>
<td>/SigmodRecord/issue/volume="11"/number="1"/articles/title</td>
</tr>
<tr>
<td>QS4</td>
<td>/SigmodRecord/issue/article/initPage="30"/endPage="44"/title</td>
</tr>
<tr>
<td>QS5</td>
<td>/SigmodRecord/issue/articles/article/endPage="44"/initPage="30"/volume</td>
</tr>
<tr>
<td>QS6</td>
<td>/article/editor="Frank Manola"/title</td>
</tr>
<tr>
<td>QS7</td>
<td>/article/editor="Paul R. McJones"/title</td>
</tr>
<tr>
<td>QS8</td>
<td>/article/editor="Paul R. McJones"/volume="SRC1997-018"/title</td>
</tr>
<tr>
<td>QS9</td>
<td>/article/editor="Paul R. McJones"/journal="Digital System Research Center Report"/year</td>
</tr>
<tr>
<td>QS10</td>
<td>/digital/article/author="Tor Helleseth"/year</td>
</tr>
<tr>
<td>QS11</td>
<td>/digital/inproceedings/author="Tor Helleseth"/title/sub</td>
</tr>
<tr>
<td>QS12</td>
<td>/digital/inproceedings/title/"i=pertC"/sub</td>
</tr>
<tr>
<td>QS13</td>
<td>/digital/inproceedings/title/"sub=INF"/sub</td>
</tr>
<tr>
<td>QS14</td>
<td>/digital/inproceedings/"i=INF"/sub</td>
</tr>
</tbody>
</table>
5.7 Performance Evaluation
In this section, the efficiency of the object-based Value index is studied. As mentioned above, the system supports search by content and structure. To achieve this goal, our indices provide mechanisms to process the content and structure efficiently: the Structural and Content indexes are combined to answer regular path queries with predicates over values.
We rely on our indices to find the value predicates before finding and matching the node positions. The rationale is that content search normally has high selectivity; by performing the content search first, we can reduce the complexity of the structural joins. The content search over the specified value predicates works as a filter prior to the structural search.
5.7.1 CPU Time.
We compare the time performance of our object-based Value index with the standard value index. Figure 4(a) shows the experiments run on the Auction data set. The queries represent a combination of the different criteria mentioned in Section 5.6.
Our index is more efficient than the standard one in all queries. For instance, while our index takes about 0.1735 milliseconds to retrieve one answer of QA7, the standard index, when querying data of the same size, takes almost 0.2518 milliseconds. The standard index performs reasonably well since it uses our structural index to search the structural part of the query. However, our method performs better by combining the strength of the object-based structural index with the strength of the object-based Value index. The objects in our Value index carry semantic meaning, and each value is stored based on its path and tokens within an object to provide fast access to the right values that match the value predicates. This is in contrast to the standard value index, which does not carry any semantic meaning and thus consumes search time in finding the right matched values.
Figures 5(a) and 5(b) show the execution time of the queries evaluated over the DBLP data set. Our experiments reveal that our Value index outperforms the standard value index. For example, QD6 retrieves 15 results from data of 3,332,131 nodes; our index needs about 600 milliseconds to produce the results while the standard requires around 900 milliseconds. Since the standard value index is node-based, increasing the total number of nodes increases the amount of data to scan and check, because it has no specific technique for skipping irrelevant portions of data, which ours has. Our Value index trims the search space based on the objects. For instance, in QD6, our method needs to access only the part of the data related to the "article" node, whereas the standard method searches based on nodes and needs to access many nodes that do not participate in the final results.
The results of Figure 4(b) support the earlier experimental outcomes. Our index took less time to evaluate the queries over the SigmodRecords dataset. We can notice that QS5 performs particularly well because all the relationships between its query nodes are P-C, while the other queries are hybrids of P-C and A-D. A big difference in performance can be observed due to the type of relationships: our method gains more benefit in query performance from the P-C relationship than the standard method does.
### 5.7.2 The total number of scanned elements.
The main purpose of this experiment is to demonstrate the capability of our index to avoid scanning irrelevant portions of the data.
Figure 6 reports the total number of elements visited during the evaluation of the Auction data. The total for our index is 68% lower than for the standard index. Similarly, Figures 7(a) and 7(b) show that the total number of visited elements decreases by 59.6% and 77.7% for DBLP and SigmodRecords, respectively. This is evidence of the efficiency of exploiting the semantics of XML data in constructing the Value index: since all the query nodes have a semantic connectivity between them that mirrors the data's representation, our Value index exploits this property to reduce the search space.
The total number of scanned elements is also affected by the type of relationships. In QS5, the average reduction is 98.5% of the total number scanned by each method. This leads us to conclude that the high selectivity caused by P-C relationships reduces the number of elements that must be checked in order to produce the result.
### 5.7.3 Changing the number of branches.
We select QD5 from DBLP, which has four branches, and vary the number of branches from 2 to 4 as shown in Figure 8. The CPU time to process the queries is shown in Figure 9(a), and the total number of checked elements in Figure 9(b). The CPU cost of both methods increases as the number of branches in the queries increases. However, the cost of the standard value index grows much faster than that of our Value index, because the standard index needs to scan more elements than ours, as shown in Figure 9(b).
Figure 8: The query used in the experiment of changing the number of branches
(a) Two branches
(b) Three branches
(c) Four branches
Figure 9: Varying the number of branches
(a) The elapsed time in milliseconds
(b) The total number of scanned elements
## 6 Conclusion and Future Work
This paper proposes the Structural index to handle the structural part of CAS queries and the Content index to handle the content part. The indices exploit the semantics of the XML schema and XML data in their construction. In addition, this paper introduced query processing algorithms over the proposed indices. The performance evaluation demonstrates the benefits gained from applying the semantic-based indices in trimming the search space and avoiding unnecessary data scanning. The evaluation results on different XML datasets indicate that our proposed method improves performance by applying semantic concepts in its Content index, compared to the non-semantic index. Due to space limitations, we are unable to describe the accuracy of our proposed indexing scheme in detail within this paper. However, we can report that, based on the data sets used in this paper, the accuracy of the query results is approximately 97%. The details of these accuracy experiments, including precision and recall, will appear in an extended version of this paper. Our future work will include other types of predicates. In particular, Boolean predicates are an important part of the query; since they have not been widely investigated, we would like to focus on this part of the query, especially when all the Boolean operators, i.e. AND, OR, and NOT, appear in a single query.
# A language for automatically enforcing privacy policies
<table>
<thead>
<tr>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>As Published</td>
<td><a href="http://dx.doi.org/10.1145/2103656.2103669">http://dx.doi.org/10.1145/2103656.2103669</a></td>
</tr>
<tr>
<td>Publisher</td>
<td>Association for Computing Machinery (ACM)</td>
</tr>
<tr>
<td>Version</td>
<td>Author's final manuscript</td>
</tr>
<tr>
<td>Accessed</td>
<td>Fri Dec 14 22:23:37 EST 2018</td>
</tr>
<tr>
<td>Citable Link</td>
<td><a href="http://hdl.handle.net/1721.1/72667">http://hdl.handle.net/1721.1/72667</a></td>
</tr>
<tr>
<td>Terms of Use</td>
<td>Creative Commons Attribution-Noncommercial-Share Alike 3.0</td>
</tr>
<tr>
<td>Detailed Terms</td>
<td><a href="http://creativecommons.org/licenses/by-nc-sa/3.0/">http://creativecommons.org/licenses/by-nc-sa/3.0/</a></td>
</tr>
</tbody>
</table>
Abstract
It is becoming increasingly important for applications to protect sensitive data. With current techniques, the programmer bears the burden of ensuring that the application’s behavior adheres to policies about where sensitive values may flow. Unfortunately, privacy policies are difficult to manage because their global nature requires coordinated reasoning and enforcement. To address this problem, we describe a programming model that makes the system responsible for ensuring adherence to privacy policies. The programming model has two components: 1) core programs describing functionality independent of privacy concerns and 2) declarative, decentralized policies controlling how sensitive values are disclosed. Each sensitive value encapsulates multiple views; policies describe which views are allowed based on the output context. The system is responsible for automatically ensuring that outputs are consistent with the policies. We have implemented this programming model in a new functional constraint language named Jeeves. In Jeeves, sensitive values are introduced as symbolic variables and policies correspond to constraints that are resolved at output channels. We have implemented Jeeves as a Scala library using an SMT solver as a model finder. In this paper we describe the dynamic and static semantics of Jeeves and the properties about policy enforcement that the semantics guarantees. We also describe our experience implementing a conference management system and a social network.
Categories and Subject Descriptors D.3.3 [PROGRAMMING LANGUAGES]: Language Constructs and Features
General Terms Languages, security
Keywords Language design, run-time system, privacy, security
### 1. Introduction
As users share more personal data online, it becomes increasingly important for applications to protect confidentiality. This places the burden on programmers to ensure compliance even when both the application and the policies may be evolving rapidly.
Ensuring compliance with privacy policies requires reasoning globally about both the flow of information and the interaction of different policies affecting this information. A number of tools have been developed to check code against privacy policies statically [4, 19] and dynamically [27]. While these checking tools can help avoid data leaks, the programmer is still responsible for implementing applications that display enough information to satisfy the user’s needs without violating privacy policies. The programming model that we propose goes beyond checking to simplify the process of writing the code that preserves confidentiality.
The main contribution of this paper is a new programming model that makes the system responsible for automatically producing outputs consistent with programmer-specified policies. This automation makes it easier for programmers to enforce policies specifying how each sensitive value should be displayed in a given context. The programming model has two components: a core program representing policy-agnostic functionality and privacy policies controlling the disclosure of sensitive values. This separation of policies from core functionality allows the programmer to express policies explicitly in association with sensitive data rather than implicitly across the code base. The declarative nature of policies allows the system to ensure compliance even when these policies interact in non-trivial ways.
We have implemented this programming model in a new functional constraint language named Jeeves. Jeeves introduces three main concepts: sensitive values, policies, and contexts. Sensitive values are introduced as pairs \( \langle v_\perp \mid v_\top \rangle_\ell \), where \( v_\perp \) is the low-confidentiality value, \( v_\top \) is the high-confidentiality value, and \( \ell \) is a level variable that can take on the values \( \perp \) or \( \top \) and determines which view of the value should be shown. Policies correspond to constraints on the values of level variables. The language of policies is a decidable logic of quantifier-free arithmetic constraints, boolean constraints, and equality constraints over records and record fields. A policy may refer to a context value characterizing the output channel and containing relevant information about how the data is viewed.
For example, in a small social networking application we implemented as a case study, we included a policy that allows users to restrict the disclosure of their location to users in their geographic vicinity. Because the location is a sensitive value, a function such as print that tries to output a value derived from a location will need to be passed a context containing the location of the user to whom this value is about to be displayed. Using this context, the runtime system can then derive an output that is consistent with the policy.
We formally specify Jeeves to show that the high-confidentiality component of a sensitive value can only affect program output if the policies allow it. We define Jeeves in terms of \( \lambda_J \), a constraint functional language that describes the propagation and enforcement of policies in Jeeves. \( \lambda_J \) differs from existing constraint functional languages [9, 10, 15, 18, 25] in the restrictions it places on the logical model and in its use of default logic to provide determinism in program behavior. These restrictions make it possible for \( \lambda_J \) to have an efficient execution model without sacrificing too much expressiveness. There is a straightforward translation from Jeeves
to $\lambda_J$: Jeeves level variables for sensitive values are logic variables, policies are assertions, and all values depending on logic variables are evaluated symbolically. The symbolic evaluation and constraint propagation in $\lambda_J$ allow Jeeves to automatically enforce policies about information flow.
We implemented Jeeves as a domain-specific language embedded in Scala [20] using the Z3 SMT solver [17] to resolve constraints as a way to demonstrate the feasibility of the Jeeves programming model. To evaluate the expressiveness of Jeeves, we have used our Scala embedding to implement a small conference management system and a small social network. The case studies show that Jeeves allows the separate implementation of functionality and policies. For example, in the conference management example the code makes no attempt to differentiate between users, or even the general public, yet the policies ensure that the system displays the right level of information to each user through every phase of the review process.
In summary, we make the following contributions in this paper:
- We present a programming model and language, Jeeves, that allows programmers to separate privacy concerns from core program functionality.
- We formalize the dynamic and static semantics of Jeeves in terms of $\lambda_J$, a new constraint functional language, and prove that Jeeves executions satisfy a non-interference property: a high-confidentiality value cannot affect program output unless the policies allow it.
- We describe the implementation of Jeeves as an embedded domain-specific language in Scala using the Z3 SMT solver.
- We describe small case studies that show that Jeeves supports the desired policies and allows the programmer to separately develop core functionality and policies.
### 2. Delegating Privacy to Jeeves
Jeeves allows the programmer to specify policies explicitly and upon data creation rather than implicitly across the code base. The Jeeves system trusts the programmer to correctly specify policies describing high- and low-confidentiality views of sensitive values and to correctly provide context values characterizing output channels. The runtime system is responsible for producing outputs consistent with the policies given the contexts. Jeeves guarantees that the system will not leak information about a high-confidentiality value unless the policies allow this value to be shown.
In this section, we introduce a simple conference management example to explain the main ideas in Jeeves: introducing sensitive values, writing policies, providing contexts, and implementing the core program logic. Conference management systems have simple information flow policies that are difficult to implement given the interaction of features. Being able to separately specify the policies allows the core functionality to be quite concise. In fact, we can write a single program that allows all viewers to access the list of papers directly for searching and viewing. The program relies on the Jeeves runtime system to display the appropriately anonymized information for reviewers vs. the general public.
For the sake of brevity, we present Jeeves using an ML-like concrete syntax, shown in Figure 1.
#### 2.1 Introduction to Jeeves
We first describe how to introduce sensitive values, use them to compute result values, and display the results in different output contexts. Suppose we have the policy that a sensitive value name should be seen as "Alice" by users with a high confidentiality level and as "Anonymous" by anybody else. A Jeeves program can use the name value as follows:
```
let msg = "Author is " + name
print { alice } msg   (* Output: "Author is Alice" *)
print { bob } msg     (* Output: "Author is Anonymous" *)
```
To achieve the different outputs for alice and bob, we associate a policy with name by declaring it through the following Jeeves code:
```
let name =
  level a in
  policy a: !(context = alice) then ⊥ in
  <"Anonymous" | "Alice">(a)
```
This code introduces a level variable a and associates it with a policy stating that if the context value is not alice, then a is ⊥. The context value represents a viewer in the output channel. The code then attaches this policy to the sensitive value `<"Anonymous" | "Alice">(a)`, which defines the low-confidentiality view as the string "Anonymous" and the high-confidentiality view as "Alice". When this code is executed, the Jeeves runtime ensures that only the user alice can see her name appearing as the author in the string msg. User bob sees the string "Author is Anonymous".
Each sensitive value defines a low-confidentiality and a high-confidentiality view of a value. A Jeeves program defines sensitive values by introducing a tuple $\langle v_\perp \mid v_\top \rangle(\ell)$, where $v_\perp$ is the low-confidentiality value, $v_\top$ is the high-confidentiality value, and $\ell$ is a level variable associated with a set of policies determining which of the two values to show. An expression containing $n$ sensitive values can evaluate to one of $2^n$ possible views. Level variables provide the means of abstraction to specify policies incrementally and independently of the sensitive value declaration. Level variables can be constrained directly (by explicitly passing around a level variable) or indirectly (by constraining another level variable on which there is a dependency). It is possible to encode more than two privacy levels, but for the sake of simplicity the paper assumes only two.
Policies, introduced through policy expressions, provide declarative rules describing when to set a level variable to $\top$ or $\bot$. Notice that the policy above forces a to be $\bot$ when the user is not alice; other policies could further restrict the level variable to be $\bot$ even for alice, but no amount of policy interactions can allow a different user to see the $v_\top$ value in contradiction with the policy. Policies may mention variables in scope and also the context variable, which corresponds to an implicit parameter characterizing the output channel.
The context construct relieves the programmer of the burden of structuring code to propagate values from the output context to the policies. Statements such as print that release information to the viewer require a context parameter. The Jeeves runtime system propagates policies associated with sensitive values so that when a value is about to be displayed through an output channel, the right context can be inserted into the policy and the appropriate result can be produced. In addition to print shown above, other output channels include sending e-mail and writing to file.
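The behavior of the name example can be mimicked with a toy model (illustrative Python; this sketches the faceted-value semantics, not the actual Jeeves implementation):

```python
class Sensitive:
    """A two-faceted value <low | high>(level); a policy decides the facet."""
    def __init__(self, low, high, policy):
        self.low, self.high, self.policy = low, high, policy

    def view(self, context):
        # policy(context) plays the role of the level variable:
        # True means top (high facet), False means bottom (low facet).
        return self.high if self.policy(context) else self.low

def display(context, parts):
    # Facets propagate through concatenation and are collapsed only at
    # the output channel, where the context is finally known.
    return "".join(p.view(context) if isinstance(p, Sensitive) else p
                   for p in parts)

# The policy: only alice may see the high-confidentiality facet.
name = Sensitive("Anonymous", "Alice", lambda ctx: ctx == "alice")
msg = ["Author is ", name]

assert display("alice", msg) == "Author is Alice"
assert display("bob", msg) == "Author is Anonymous"
```

The key design point this captures is that `msg` is built with no mention of any viewer; the context is supplied only at the `display` (output) boundary, mirroring Jeeves's `print { context } expr`.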
#### 2.2 Declarative and Decentralized Policies
We now describe how to write policies in Jeeves using fragments of our conference management example. The paper record is defined below; it assumes single author papers to simplify the presentation.
---
### Figure 1: Jeeves syntax.
```
Level ::= ⊥ | ⊤                              levels
Exp   ::= v | Exp1 op Exp2                   expressions
        | if Exp1 then Exp2 else Exp3
        | Exp1 Exp2
        | <Exp1 | Exp2>(ℓ)
        | level ℓ in Exp
        | policy ℓ: Exp then Level in Exp
Stmt  ::= let x = Exp                        statements
        | print { Exp } Exp
```
---
The idiomatic way of attaching policies to values is to create sensitive values for each field and then attach policies:
```scala
let mkPaper
    (title: string) (author: string)
    (reviews: review list) (accepted: bool option): paper =
  level tp, authp, rp, accp in
  let p = { title    = <"" | title>(tp)
          ; author   = <"Anonymous" | author>(authp)
          ; reviews  = <[] | reviews>(rp)
          ; accepted = <none | some accepted>(accp) } in
  addTitlePolicy p tp; addAuthorPolicy p authp;
  addReviewsPolicy p rp; addAcceptedPolicy p accp;
  p
```
This function introduces level variables for each of the fields, creates sensitive values for each of the fields, attaches policies to the level variables, and returns the resulting paper record.
The Jeeves programmer associates policies with sensitive values by introducing level variables, attaching policies to them, and using them to create sensitive values. Consider the policy that the title of a paper should be visible to the authors of the paper, reviewers, and PC members and only visible to the general public after it is public that the paper has been accepted. We can define addTitlePolicy as follows:
```scala
let addTitlePolicy (p: paper) (a: level): unit =
  policy a: !(isAuthor p context.viewer
              || context.viewer.role = Reviewer
              || context.viewer.role = PC
              || (context.stage = Public && isAccepted p)) then ⊥
```
A similar policy guards the accepted field: reviewers and program committee members can always see whether a paper has been accepted, while others can see this field only once the stage is Public. With this policy in place, the title field cannot leak information about the accepted tag. Even if the policy for paper titles were to drop the context.stage = Public requirement, the policy for accepted would prevent the titles of accepted papers from being leaked before the Public stage.
#### 2.2.2 Circular Dependencies and Defaults
The Jeeves system can also enforce policies when there are circular dependencies between sensitive values, as could happen when a context value depends on a sensitive value. Consider the following function that associates a policy with the authors of a paper:
```scala
let addAuthorPolicy (p: paper) (n: level): unit =
  policy n: !(isAuthor p context.user
              || (context.stage = Public && isAccepted p)) then ⊥
```
This policy says that to see the author of a paper, the user must be an author or the paper must be a publicly accepted paper. Now consider functionality that sends messages to authors of papers:
```scala
let sendMsg (author: user) =
let msg = "Dear " + author.name + ...
in sendmail { user = author; stage = Review } msg
```
The policy for level variable n depends on context.user. Here, context.user is a sensitive value, as the value of the author variable depends on the viewer.
This leads to a circular dependency that makes the solution underconstrained: the value of the message depends on the context value, which in turn contains the message recipient. Either sending mail to the empty user or sending mail to the author is correct under the policy. The latter behavior is preferred, as it ensures that user a can communicate with user b without knowing private information about user b. The Jeeves runtime ensures this maximally functional behavior by setting level variables to ⊤ by default: if the policies allow a level variable to be either ⊤ or ⊥, its value will be ⊤.
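This maximally functional default can be sketched as follows (illustrative Python; a policy is modelled simply as a predicate on the context, and the level variable is a boolean):

```python
def resolve_level(policies, context):
    """Pick a value for a level variable: prefer top (True) by default,
    falling back to bottom (False) if any policy forbids top."""
    # Each policy is the condition inside 'policy l: !(cond) then ⊥':
    # if the condition fails for this context, the level is forced to ⊥.
    if all(p(context) for p in policies):
        return True   # top: the high-confidentiality facet may be shown
    return False      # bottom: only the low-confidentiality facet

# Hypothetical policy in the style of addAuthorPolicy: the viewer must
# be the author, or the paper must be publicly accepted.
is_author = lambda ctx: ctx["user"] == "alice"
policies = [lambda ctx: is_author(ctx) or ctx["stage"] == "Public"]

assert resolve_level(policies, {"user": "alice", "stage": "Review"}) is True
assert resolve_level(policies, {"user": "bob", "stage": "Review"}) is False
assert resolve_level(policies, {"user": "bob", "stage": "Public"}) is True
```

The "try ⊤ first" order is what makes the underconstrained circular case resolve to the more useful behavior (mail actually reaching the author) rather than the degenerate one.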
### 3. The $\lambda_J$ Language and Semantics
To more formally describe the guarantees, we define the semantics of Jeeves in two steps; first, we introduce $\lambda_J$, a simple constraint functional language based on the $\lambda$-calculus, and then we show how to translate Jeeves to $\lambda_J$. $\lambda_J$ differs from existing constraint functional languages [9, 10, 15, 25] in two key ways: 1) $\lambda_J$ restricts its constraint language to quantifier-free constraints involving boolean and arithmetic expressions over primitive values and records and 2) $\lambda_J$ supports default values for logic variables to facilitate reasoning about nondeterminism. $\lambda_J$’s restrictions on the constraint language allow execution to rely on an off-the-shelf SMT solver.
In this section, we introduce the $\lambda_J$ language, the dynamic semantics, the static semantics, and the translation from Jeeves. The $\lambda_J$ language extends the $\lambda$-calculus with defer and assert for introducing and constraining logic variables, as well as a concretize construct to produce concrete values from them. The dynamic semantics describe the lazy evaluation of expressions with logic variables and the interaction with the constraint environment. The static semantics describe how the system guarantees evaluation progress and enforces restrictions on symbolic values and recursion. The translation from Jeeves to $\lambda_J$ illustrates how Jeeves uses $\lambda_J$’s lazy evaluation and constraint propagation, combined with Jeeves restrictions on how logic variables are used, to provide privacy guarantees.
#### 3.1 The $\lambda_J$ Language

$\lambda_J$ is the $\lambda$-calculus extended with logic variables. Figure 2 shows the abstract syntax of $\lambda_J$. Expressions ($e$) include the standard $\lambda$ expressions extended with the `defer` construct for introducing logic variables, the `assert` construct for introducing constraints, and the `concretize` construct for producing concrete values consistent with the constraints. $\lambda_J$ evaluation produces irreducible values ($v$), which are either concrete ($c$) or symbolic ($\omega$). Concrete values are what one would expect from the $\lambda$-calculus, while symbolic values are values that cannot be reduced further due to the presence of logic variables. Symbolic values also include the `context` construct, which allows constraints to refer to a value supplied at concretization time.

The `context` variable is an implicit parameter [14] provided in the `concretize` expression. In the semantics we model the behavior of the `context` variable as a symbolic value that is constrained during evaluation of `concretize`. $\lambda_J$ contains a `let rec` construct that handles recursive functions in the standard way using `fix`. A novel feature of $\lambda_J$ is that each logic variable is also associated with a default value that serves as a default assumption: it is the value assigned to the logic variable unless that is inconsistent with the constraints. The purpose of default values is to provide some determinism when logic variables are underconstrained.
#### 3.2 Dynamic Semantics
The $\lambda_J$ evaluation rules extend $\lambda$-calculus evaluation with constraint propagation and symbolic evaluation of logic variables. Evaluation involves keeping track of constraints that are required to be true (hard constraints) and a set of constraints used for guidance when consistent with the hard constraints (default assumptions). To correctly evaluate conditionals with symbolic conditions, we also need to keep track of the (possibly symbolic) path condition. Evaluation happens in the context of a path condition $G$, an environment $\Sigma$ storing the current set of constraints, and an environment $\Delta$ storing the set of constraints on default values for logic variables. Evaluation rules take the form $G \vdash (\Sigma, \Delta, e) \rightarrow (\Sigma', \Delta', e')$.
Evaluation produces a tuple $(\Sigma', \Delta', e')$ of a resulting expression $e'$ along with modified constraint and default environments. Figure 3 shows the small-step dynamic semantics of $\lambda_J$.
#### 3.2.1 Evaluation with Logic Variables
$\lambda_J$ has the expected semantics for function application and for arithmetic and boolean operations. The E-App1, E-App2, and E-AppLambda rules describe a call-by-value semantics. The E-Op1 and E-Op2 rules show that the arguments of an operation are evaluated to irreducible expressions; if both arguments become concrete, the E-Op rule can be applied to produce a concrete result.
Conditionals whose conditions evaluate to concrete values proceed according to the E-CondTrue and E-CondFalse rules, as one would expect. When the condition evaluates to a symbolic value, the whole conditional evaluates to a symbolic if-then-else value by evaluating both branches, as described by the E-CondSym rules. Note that $\lambda_J$ expressions are pure (effects occur only in Jeeves statements), so side effects cannot occur in conditional branches.
Evaluating under symbolic conditions is potentially dangerous: a conditional with a recursive function application in a branch could lead to infinite recursion when the condition is symbolic. Our system prevents this anomalous behavior by using the type system to enforce that recursive calls are not made under symbolic conditions (Section 3.3).
#### 3.2.2 Introduction and Elimination of Logic Variables
In $\lambda_J$, logic variables are introduced through the `defer` keyword. To illustrate the semantics of `defer`, consider the example below:
$$\text{let } x = \text{defer } x' : \text{int} \ \{ x' > 0 \} \ \text{default } 42$$
As shown in the E-Defer evaluation rule, the right-hand side of the assignment evaluates to an $\alpha$-renamed version of the logic variable $x'$. Evaluation adds the constraint $G \Rightarrow x' > 0$ to the constraint environment and the constraint $G \Rightarrow x' = 42$ to the default constraint environment. The constraint $G \Rightarrow x' > 0$ is a hard constraint that must hold for any produced output, while $G \Rightarrow x' = 42$ is used only if it is consistent with the resulting logical environment. Hard constraints are introduced within the braces of `defer` expressions and through `assert` expressions; soft constraints are introduced through the `default` clause of `defer` expressions.
In addition to the constraints in `defer`, the program can introduce constraints on logic variables through `assert` expressions. The E-Assert rule describes how the constraint is added to the constraint environment, taking into account the path condition $G$. For instance, consider the following code:
$$\text{if } (x > 0) \ \text{then assert } (x = 42) \ \text{else assert } (x = -42).$$
Evaluation adds to the constraint environment the constraints $x > 0 \Rightarrow x = 42$ and $\neg (x > 0) \Rightarrow x = -42$.
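These guarded constraints behave like logical implications, which can be checked directly (illustrative Python; this models the constraint store, not the actual $\lambda_J$ evaluator):

```python
# Constraints are recorded as implications guarded by the path
# condition, so constraints from both branches of a symbolic
# conditional coexist in the store.
def implies(g, c):
    return (not g) or c

def satisfies(x, constraints):
    """constraints: list of (guard, body) pairs, each a predicate on x."""
    return all(implies(g(x), c(x)) for g, c in constraints)

constraints = [
    (lambda x: x > 0,       lambda x: x == 42),    # x > 0   => x = 42
    (lambda x: not (x > 0), lambda x: x == -42),   # !(x > 0) => x = -42
]

# Both 42 and -42 are models: each falsifies one guard and
# satisfies the other implication.
assert satisfies(42, constraints)
assert satisfies(-42, constraints)
assert not satisfies(7, constraints)
```

Guarding each assertion by the path condition is what keeps the store consistent even though both branches of the conditional were evaluated.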
Symbolic expressions can be made concrete through the `concretize` construct. Evaluation of a `concretize` expression produces either a concrete value or an error. A `concretize` expression includes the expression to concretize and a context:
$$\text{let } \ \text{result} \ = \ \text{concretize} \ x \ \text{with} \ 42.$$
The MODEL procedure in the E-CONCRETIZE rule is the model finding procedure for default logic [1]. The default environment \( \Delta \) and constraint environment \( \Sigma \) specify a supernormal default theory \((\Delta, \Sigma)\) in which each default judgement \( \sigma \in \Delta \) has the form
\[
\frac{\text{true} : \sigma}{\sigma}.
\]
The model procedure produces either a model \( \mathcal{M} \) for the theory if it is consistent, or UNSAT. We use a fixed-point algorithm for MODEL that uses classical SMT model-generating decision procedures and iteratively saturates the logical context with default judgements in a non-deterministic order.
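The fixed-point structure of MODEL can be sketched by replacing the SMT decision procedure with exhaustive search over a finite domain (all names below are illustrative, and a fixed default order stands in for the paper's non-deterministic order):

```python
from itertools import product

def model(hard, defaults, variables, domain):
    """Sketch of MODEL for a supernormal default theory (Delta, Sigma):
    check the hard constraints, then saturate with each default that
    remains consistent, and return one model of the saturated theory."""
    def consistent(constraints):
        return any(
            all(c(dict(zip(variables, vals))) for c in constraints)
            for vals in product(domain, repeat=len(variables))
        )
    if not consistent(hard):
        return "UNSAT"
    adopted = list(hard)
    for d in defaults:  # non-deterministic order in the paper; fixed here
        if consistent(adopted + [d]):
            adopted.append(d)
    for vals in product(domain, repeat=len(variables)):
        env = dict(zip(variables, vals))
        if all(c(env) for c in adopted):
            return env

# Hard constraint x > 0; defaults x = 42 and x = 7 (only one can be adopted).
m = model(hard=[lambda e: e["x"] > 0],
          defaults=[lambda e: e["x"] == 42, lambda e: e["x"] == 7],
          variables=["x"], domain=range(-50, 51))
print(m)  # {'x': 42}: the first consistent default blocks x = 7
```

Once `x = 42` is adopted, the second default is inconsistent with the saturated theory and is skipped, which mirrors how defaults are only used when consistent with the resulting logical environment.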
3.3 \( \lambda_J \) Static Semantics
The \( \lambda_J \) static semantics ensures that evaluation produces either a concrete expression or a well-formed symbolic expression. Recall that symbolic expressions must be valid constraints, which include arithmetic, boolean, and conditional expressions but not functions. (In the \( \lambda_J \) semantics we do not explicitly address data structures such as lists. Data structures are also required to be concrete but may have symbolic elements.) Thus the static semantics guarantees that 1) concrete values are supplied when concrete values are expected, 2) symbolic values are well-formed, 3) evaluation under symbolic conditions does not cause unexpected infinite recursion, and 4) context values have the appropriate types. The type system therefore ensures that the logical state will always be well-formed, although it cannot guarantee that it will be logically consistent.
To guarantee properties 1-3, the \( \lambda_J \) type system tracks the flow of symbolic values, ruling out symbolic functions and reentrant applications under symbolic conditions. Reentrant applications are function applications that may recursively call the current function; we rule out such applications under symbolic conditions to avoid the non-termination that may arise from evaluating both sides of a conditional branch with a symbolic condition. Property 4, on the other hand, is enforced by ensuring that the type of the context used at concretization is an upper bound for the types of contexts required by all the relevant policies.
We show the \( \lambda_J \) types in Figure 4 and the subtyping, type well-formedness, and typing rules in Figure 5. Base types \( \beta \) are the standard \( \lambda \)-calculus types extended with the \( \text{int} \) and \( \text{bool} \) types to indicate values that are necessarily concrete. (Expressions of function type are not permitted to be symbolic.) There are two function types: \( \twoheadrightarrow \) for functions whose application cannot be reentrant and \( \rightarrow \) for functions whose application can be reentrant. We will use the term reentrant function to refer to a function whose application can be reentrant.
Typing judgments have the form \( \Gamma; \gamma \vdash e : \tau \). A judgment says that in the type environment \( \Gamma \), under a path of type \( \gamma \) (sym or concrete), the expression \( e \) has type \( \tau \). \( \Gamma \) is defined as \( \Gamma ::= \cdot \mid x : \tau, \Gamma \). The typing rules keep track of whether a value may be symbolic (the \( \text{int} \) or \( \text{bool} \) type) or must be concrete (the concrete \( \text{int} \) and \( \text{bool} \) types, and function types). This information is used to determine the value of the \( \gamma \) tag in the T-COND and T-CONDSYM rules. Information about whether the condition is symbolic is used in 1) ruling out symbolic functions and 2) ruling out self-calls under symbolic branches.
Symbolic functions are prevented by the T-DEFER and T-CONDSYM rules, which restrict the production of symbolic values. The T-DEFER rule restricts the explicit introduction of symbolic values to base type \( \beta \), while the T-CONDSYM rule restricts the implicit introduction of symbolic values to base type \( \beta \).
The T-LETREC rule shows that recursive functions must be considered reentrant (have \( \rightarrow \) type) within their own definitions, since applying the function may cause a recursive call to the current function. Outside their declarations, on the other hand, they can have \( \twoheadrightarrow \) type. A second restriction on reentrant functions is imposed by the type well-formedness predicate \( rep \), which requires that functions taking arguments that may be reentrant (\( \rightarrow \)) be themselves labeled as reentrant. This prevents higher-order functions from being used to circumvent the restrictions on reentrant calls.
According to the rules, a reentrant call cannot occur under a symbolic condition. The T-CONDSYM rule sets \( \gamma = \text{sym} \) when the condition is symbolic. The T-APPCURREC rule allows applications of recursive functions only under concrete paths. This implies that canonical recursive functions such as factorial can only type-check if they require a concrete argument. This restriction does not prevent recursive sort or other recursive structure-traversing functions because data structures are necessarily concrete, so conditions involving their structure are also concrete.
The subtyping relationship \( <: \) allows values that are necessarily concrete to be used as potentially symbolic values. This way, functions that require concrete values can only be applied when concrete arguments are supplied, but a concrete value can be used as a symbolic value (for instance, a concrete \( \text{int} \) as a potentially symbolic \( \text{int} \)). The subtyping rules also allow non-reentrant functions (\( \twoheadrightarrow \)) to be used as reentrant functions (\( \rightarrow \)).
3.3.1 Contexts
We also have typing rules (not shown in Figure 5) ensuring that contexts of the appropriate type are provided in \( \text{concretize} \) expressions. In the T-CONCRETIZE rule, the context typing judgment ensures that the context type supplied matches the context type expected. The context typing judgement assigns each expression a context type \( \tau \). The rules propagate the context type, enforce that matching contexts are provided for sub-expressions, and enforce that the correct context type is supplied at concretization.
We define a lattice describing when different context types may be combined. The bottom of the lattice is \( \bot \), and for all context types \( \tau \) we have \( \bot \lessdot \tau \). Contexts support width subtyping on record types:
\[
\text{record } \bar{m} \lessdot \text{record } \bar{n} \iff \forall n_i.\ \exists m_j.\ m_j = n_i.
\]
A record with fields \( \bar{m} \) can be used as a context whenever a record with fields \( \bar{n} \) is expected, as long as the labelled fields \( m_j \) are a superset of the labelled fields \( n_i \).
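The width-subtyping condition amounts to a superset check on field names, as in this minimal sketch (field names are illustrative):

```python
# Width subtyping for context record types: a record with fields m may be
# supplied where a record with fields n is expected iff n's fields are a
# subset of m's fields.

def context_subtype(m_fields, n_fields):
    return set(n_fields) <= set(m_fields)

print(context_subtype({"viewer", "stage"}, {"viewer"}))  # True: extra fields ok
print(context_subtype({"viewer"}, {"viewer", "stage"}))  # False: stage missing
```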
3.4 Translation from Jeeves
There is a straightforward translation from Jeeves to \( \lambda_J \). Sensitive
values and level variables in Jeeves correspond to \( \lambda_J \) logic variables,
level policies correspond to \( \lambda_J \) assertions, and contextual enforce-
ment corresponds to producing concrete values consistent with the
logical environment. Default values provide determinism in handling
policy dependencies.
We show the translation of levels and sensitive values from
Jeeves to \( \lambda_J \) in Figure 6. We have the \( Exp \rightarrow e \) rule to describe how
a Jeeves expression translates to a \( \lambda_J \) expression \( e \). The translation
has the following properties: 1) level variables are the only logic
variables, 2) expressions containing sensitive values yield symbolic
results, 3) only Jeeves policies introduce assertions, and 4) the
\( \text{concretize} \) construct can only appear at the outermost level and is
associated with an effectful computation.
3.4.1 Sensitive Values
A Jeeves sensitive value \( \langle v_1 \mid v_2 \rangle(a) \) is translated to a symbolic value equal to either \( v_1 \) or \( v_2 \), depending on the value of the level variable \( a \). Because sensitive values are symbolic, all expressions computed from this sensitive value are subject to policies depending on the value of level variable \( a \).
3.4.2 Level Variables
Jeeves level variables are translated to \( \lambda_J \) expressions binding a new logic variable of \( \text{level} \) type equal to either \( \bot \) or \( \top \). The default value of level variables is \( \top \): the constraint solving oracle first resolves the constraint environment under the assumption that each level is \( \top \), and only revises this belief if the variable must be equal to \( \bot \). This provides the programmer with some guarantees about program behavior when level variables are underconstrained. Underconstrained levels can arise, for instance, if values in the context depend on sensitive values.
Besides being useful in handling circular dependencies, having
the default value of level variables as \( \top \) prevents the programmer
from leaking a value as a result of an underspecified value. If a level
variable is underconstrained, policies on a subsequent variable can
affect the value it can take:
\begin{align*}
1 &\quad \text{let } x = \text{level } a \text{ in } \langle 0 \mid 1 \rangle(a) \\
2 &\quad \text{let } y = \text{level } b \text{ in} \\
3 &\quad \text{policy } b: \text{true then } \top \text{ in} \\
4 &\quad \text{policy } b: x = 1 \text{ then } \bot \text{ in} \\
5 &\quad \langle 0 \mid 1 \rangle(b)
\end{align*}
If the value of \( x \) were fixed, this would yield a contradiction; instead, these policies indirectly fix the values of \( x \) and \( a \):
\begin{align*}
(1)\ & b = \top && \text{(line 3)} \\
(2)\ & x \neq 1 && \text{(line 4)} \\
(3)\ & x = 0 && \text{(line 1)} \\
(4)\ & a = \bot && \text{(line 1)}
\end{align*}
Making underconstrained level variables \( \top \) by default forces programmers to explicitly introduce policies setting level variables to \( \bot \). For this reason, underspecification will only cause level variables to be set to \( \bot \) instead of \( \top \).
3.4.3 Declarative Constraint Policies
As we show in Figure 6, level policies are translated to \( \lambda_J \) \( \text{assert} \) expressions. Level policies can be introduced on any logic variables
expressions. Level policies can be introduced on any logic variables
in scope and are added to the environment based on possible
path assumptions made up to that point. The policy that a Jeeves
expression \( Exp \) enforces consists of the constraint environment
produced when evaluating \( Exp \) as a \( \lambda_J \) expression. More specifically,
we are talking about \( \Sigma \) and \( \Delta \), where \( Exp \rightarrow e \) and \( e \rightarrow^* \langle \Sigma, \Delta, v \rangle \). This policy contains constraints determining whether level variables can be \( \bot \) or \( \top \).
**Figure 5:** Static semantics for \( \lambda_J \), describing simple type-checking and enforcing restrictions on the scope of nondeterminism and recursion. Recall that \( \beta \) refers to base (non-function) types.
**Figure 6:** Translation from Jeeves to \( \lambda_J \)
3.4.4 Contextual Enforcement at Output Channels
Effectful computations such as print in Jeeves require contexts corresponding to the viewer to whom the result is displayed. As we show in the TR-PRINT rule, a \( \text{concretize} \) expression is inserted in the translation. Because sensitive values can only produce concrete values consistent with the policies, this ensures enforcement of policies at output channels.
4. Properties
We describe more formally the guarantees that Jeeves provides. We show progress and preservation properties for \( \lambda_J \). We also show that the only way the value of the high component of a sensitive value can affect the output of the computation is if the policies permit it.
4.1 Progress and Preservation
We first show the correctness of evaluation. We can prove progress and preservation properties for \( \lambda_J \): evaluation of an expression \( e \) always results in a value \( v \) and preserves the type of \( e \), including the internal nondeterminism tag \( \delta \).
There are two interesting parts to the proof: showing that all function applications can be reduced and showing that all defer and assert expressions can be evaluated to produce appropriate constraint expressions. We can first show that the \( \lambda_J \) type system guarantees that all functions are concrete.
**Lemma 1 (Concrete Functions).** If \( v \) is a value of type \( \tau_1 \rightarrow \tau_2 \), then \( v = \lambda x : \tau_1 . e \), where \( e \) has type \( \tau_2 \).
**Theorem 4.1 (Progress).** Suppose \( e \) is a closed, well-typed expression. Then \( e \) is either a value \( v \) or there is some \( e' \) such that \( \langle \emptyset, \emptyset, e \rangle \rightarrow \langle \Sigma', \Delta', e' \rangle \).
**Proof.** The proof mostly involves induction on the typing derivations. One interesting case is ensuring that MODEL will either return a valid model \( \mathcal{M} \) or UNSAT for the E-CONCRETIZE/SAT and E-CONCRETIZE/UNSAT rules. Since the \( \lambda_J \) type system rules out symbolic functions, only well-formed constraints can be added. The other interesting case is function applications \( e = e_1\, e_2 \), where \( e_1 \) and \( e_2 \) are well-typed with types \( \tau_1 \rightarrow \tau_2 \) and \( \tau_1 \). We can rule out the cases where \( e_1 \) and \( e_2 \) are not values by applying the induction hypothesis. For the case when \( e_1 \) and \( e_2 \) are both values, we can apply the Concrete Functions Lemma to deduce that \( e_1 \) must have the form \( \lambda x : \tau_1 . e \), where \( e : \tau_2 \). In this case, we can apply the E-APPABS rule.
We can also prove a preservation theorem that evaluation does not change the type of a \( \lambda_J \) expression.
**Theorem 4.2 (Preservation).** If \( \Gamma \vdash e : \tau \delta \) and \( e \rightarrow e' \), then \( \Gamma \vdash e' : \tau \delta \).
**Proof.** We can show the preservation of both \( \tau \) and \( \delta \) by induction on the typing derivation. The \( \delta \) value for all evaluation rules except for the E-CONCRETIZE rules is the same for both sides.
4.2 Confidentiality Theorem
We show that level variables enforce the confidentiality of values: once the policies set a level variable \( \ell = \bot \) for a sensitive value, the output will be derived as if the high component of that value were not involved in evaluation at all. Because we have \( \ell = \bot \) if and only if we have policies that require \( \ell = \bot \), Jeeves programmers can rely on policies to enforce confidentiality.
We first prove that the high-confidentiality views of sensitive values are protected by level variables. We can show that for a sensitive value \( v = \langle Exp_l \mid Exp_h \rangle(\ell) \), the only way the high component \( Exp_h \) may affect the output of the computation is when \( \ell = \top \) is consistent with the policies. It is impossible for an observer to distinguish between \( v = \langle Exp_l \mid Exp_h \rangle(\ell) \) and \( v' = \langle Exp_l \mid Exp'_h \rangle(\ell) \) if the policies require \( \ell = \bot \).
**Theorem 4.3 (View Non-Interference).** Consider a sensitive value \( V = \langle E_l \mid H \rangle(\ell) \) in a Jeeves expression \( E \), where \( H \) is a placeholder for the high component. Assume:
\begin{align*}
E[H \mapsto E_h] \rightarrow e \quad & \langle \emptyset, \emptyset, e \rangle \rightarrow^* \langle \Sigma, \Delta, \sigma \rangle \\
E[H \mapsto E'_h] \rightarrow e' \quad & \langle \emptyset, \emptyset, e' \rangle \rightarrow^* \langle \Sigma', \Delta', \sigma' \rangle
\end{align*}
For any context value \( v \), if
\[
\Sigma \cup \{ \text{context} = v \} \vdash \ell = \bot
\quad \text{and} \quad
\Sigma' \cup \{ \text{context} = v \} \vdash \ell = \bot,
\]
then
\[
\{ c \mid \langle \Sigma, \Delta, \text{concretize } \sigma \text{ with } v \rangle \rightarrow c \}
= \{ c' \mid \langle \Sigma', \Delta', \text{concretize } \sigma' \text{ with } v \rangle \rightarrow c' \}.
\]
**Proof.** From the rules of the Jeeves translation, \( V \) maps to an irreducible symbolic expression \( \text{if } \ell \text{ then } e_h \text{ else } e_l \) in \( e \), where \( E_h \rightarrow e_h \) and \( E_l \rightarrow e_l \). Thus \( e' \) is \( e \) with the expression \( e_h \) replaced by \( e'_h \), where \( E'_h \rightarrow e'_h \). In addition, we also know that both \( e \) and \( e' \) are \( \lambda_J \) expressions with no \( \text{concretize} \) sub-expressions. This makes evaluation of \( e \) and \( e' \) deterministic and allows us to put their derivation trees in correspondence. There are two places where evaluation differs: (1) reduction of \( e_h \) and \( e'_h \) (rule E-COND/SYM) and (2) substitution of the reduced sensitive value (rule E-APPL/SYM). Let us examine how they affect the logical environment and the resulting symbolic value.
The values \( \sigma \) and \( \sigma' \) may differ only in the subexpressions \( e_h \) and \( e'_h \) reduce to. These subexpressions are guarded by the level variable \( \ell \). Since the logical environments entail that \( \ell = \bot \) under context \( v \), any model chosen at concretization sets \( \ell = \bot \). Therefore, under such models \( \sigma \) and \( \sigma' \) evaluate to the same value. Then to show that the set of concrete values is the same, it suffices to show the models of \( (\Delta,\Sigma) \) are models of \( (\Delta',\Sigma') \) and vice versa.
The dynamic semantics populates \( \Sigma \) and \( \Sigma' \) with the same constraints (modulo substitution of the sensitive variable) except during reduction of \( e_h \) and \( e'_h \). The constraints added at rule E-COND/SYM are all guarded by the level variable \( \ell \). Since \( \Sigma \) and \( \Sigma' \) both entail \( \neg \ell \), and these guarded constraints are implied by \( \neg \ell \), we can safely eliminate them from both \( \Sigma \) and \( \Sigma' \). That leaves us with the same set of hard constraints introduced through defer and assert expressions.
The default judgements in Jeeves all have the form \( \ell = \top \). Therefore, for the shared level variables, \( \Delta \) and \( \Delta' \) use the same default value \( \top \). The remaining level variables do not affect evaluation of \( \sigma \) and \( \sigma' \).
Our non-interference theorem allows programmers to rely on policies to enforce confidentiality. In Jeeves, policies have the form \( \phi \Rightarrow (\ell = \top) \) or \( \phi \Rightarrow (\ell = \bot) \). By the theorem, once the policy setting \( \ell \) to \( \bot \) is guaranteed to be added to the constraint environment, the output is going to be the same as if the high view component of the sensitive value were not involved in evaluation at all. If policies permit both \( \bot \) and \( \top \) levels, then the default logic model finder will guide evaluation to a model maximizing the levels set to \( \top \).
Note that if policies are contradictory and the set of constraints is unsatisfiable, the evaluation halts with an error and no value is exposed. The theorem still holds and this behavior is safe.
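The non-interference guarantee can be illustrated with a toy simulation (this is not the \( \lambda_J \) semantics; `concretize`, the policy list, and the viewer names are all illustrative): when the policies force \( \ell = \bot \) for a viewer, the output is independent of the high component.

```python
def concretize(sensitive, policies, viewer):
    """Toy check of view non-interference: a sensitive value <low|high>(l)
    is shown as its high component only if every policy allows l = TOP
    for this viewer; otherwise the low view is produced."""
    low, high = sensitive
    level_is_top = all(p(viewer) for p in policies)
    return high if level_is_top else low

is_pc = [lambda viewer: viewer == "pc"]

# Two runs differing only in the high component, observed by a non-PC viewer:
print(concretize((0, 41), is_pc, "guest"))  # 0
print(concretize((0, 99), is_pc, "guest"))  # 0 -- indistinguishable outputs
print(concretize((0, 41), is_pc, "pc"))     # 41 -- policies permit TOP
```

The guest cannot distinguish the two runs, mirroring the theorem's conclusion that the sets of concretized values coincide when \( \ell = \bot \) is entailed.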
5. Scala Embedding
We have implemented Jeeves as an embedded domain-specific language in the Scala programming language [20]. Scala's overloading capabilities offer the necessary flexibility for designing a domain-specific language for \( \lambda_J \), with the benefit of interoperability with existing Java technology.
In this section we discuss our Scala embedding of λJ and our implementation of the Jeeves library on top of that. We describe how we used features of Scala to implement λJ’s lazy evaluation of symbolic expressions, how we collect constraints, and how we interact with the Z3 SMT solver. On top of the functional model we have presented, we also handle objects and mutation.\footnote{The code is publicly available at http://code.google.com/p/scalasmt/}
5.1 ScalaSMT: Scala Embedding of λJ
Every kind of symbolic expression in λJ has a corresponding Scala case class, for instance IntExpr corresponding to symbolic integer expressions. Arithmetic and boolean operators are defined as methods constructing new expressions. We use implicit type conversions to lift concrete Scala values to symbolic constants. Scala’s type inference resolves \(x+1\) to \(x,+(\text{Constant}(1))\) which in turn evaluates to \(\text{Plus}(x, \text{Constant}(1))\), where \(x\) is a symbolic integer variable. Implicit type conversion allows us to use concrete expressions in place of symbolic ones but requires type annotations where a symbolic expression is expected to be used.
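The lifting of concrete values into symbolic expression trees can be sketched with operator overloading (a Python stand-in for Scala's implicit conversions; the class names mirror the text but the code is illustrative):

```python
# Symbolic expressions as an AST; __add__/__radd__ play the role of the
# lifted operators, and lift() plays the role of the implicit conversion.

class Expr:
    def __add__(self, other):
        return Plus(self, lift(other))
    def __radd__(self, other):
        return Plus(lift(other), self)

class Var(Expr):
    def __init__(self, name): self.name = name
    def __repr__(self): return self.name

class Constant(Expr):
    def __init__(self, v): self.v = v
    def __repr__(self): return repr(self.v)

class Plus(Expr):
    def __init__(self, l, r): self.l, self.r = l, r
    def __repr__(self): return f"Plus({self.l!r}, {self.r!r})"

def lift(v):
    # Wrap concrete values as symbolic constants.
    return v if isinstance(v, Expr) else Constant(v)

x = Var("x")
print(x + 1)  # Plus(x, 1): x + 1 builds an AST instead of computing a number
```

As in the Scala embedding, the operator builds a `Plus` node rather than evaluating, so the concrete operand is silently lifted to a `Constant`.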
The three core language extensions defer, assert, and concretize are implemented as library calls. We implement the library as a Scala trait that maintains the logical and default constraint environments as lists of symbolic boolean expressions. Calls to concretize invoke an off-the-shelf SMT solver \cite{z3} for the satisfiability query MODEL. We translate λJ constraints to the QF_LIA logic of SMT-LIB2 \cite{lib2} and use incremental scripting to implement the default logic decision procedure. Concretization in ScalaSMT differs from λJ in two ways. First, concretize accepts an arbitrary boolean expression rather than a context equality predicate. Second, concretize is not allowed to be a part of a symbolic expression in ScalaSMT. Since concretization generally happens as part of print routine, this restriction does not affect our case studies.
In addition to boolean and integer constraints, the Scala embedding supports symbolic expressions for objects corresponding to λJ records with equality theory. Objects are modeled as a finite algebraic data type in Z3 \cite{z3}. The set of available objects is maintained by ScalaSMT using registration of instances of a special trait Atom. Object fields are modeled as total functions interpreted at the time of concretization. Fields are (sort-)typed with values that are arbitrary ScalaSMT expressions and constants. ScalaSMT does not check types of symbolic object expressions: we rely on Scala’s support for dynamic invocation to resolve field dereferences. We use special zero values (null, 0, or false) to represent undefined fields in SMT.
ScalaSMT does not support symbolic collections. Instead, we use implicit type conversions to extend the standard Scala collection library with filter and has methods that take symbolic arguments. The argument to filter is a function \(f\) from an element to a symbolic boolean; it maps every element \(o\) to the conditional expression IF \(f(o)\) THEN \(o\) ELSE NULL. The has method takes a symbolic object \(o\) and produces a disjunction of equalities between the elements of the collection and \(o\).
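The filter and has extensions can be sketched as constraint builders (illustrative Python; the real implementation produces ScalaSMT expression objects, whereas here constraints are plain tuples):

```python
NULL = None

def filter_sym(collection, f):
    # Map every element o to the conditional IF f(o) THEN o ELSE NULL,
    # leaving the choice to be resolved at concretization.
    return [("if", f(o), o, NULL) for o in collection]

def has(collection, o):
    # Build the disjunction (e1 = o) or (e2 = o) or ... over the elements.
    return ("or", [("eq", elem, o) for elem in collection])

print(has(["alice", "bob"], "viewer"))
print(filter_sym([1, 2], lambda o: ("gt", o, 0)))
```

Neither call inspects the symbolic argument; both simply defer the membership and selection decisions to the solver.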
5.2 Jeeves as a Library in Scala
We have implemented Jeeves as a library on top of ScalaSMT. Our library has function calls corresponding to Jeeves’s sensitive values, level construct, policy construct, and contextual output functions (see Figure 7).
Levels are introduced using the mkLevel method, which returns a logical level variable that can be either \(\top\) or \(\bot\). Sensitive values are created with the mkSensitive method, which takes a level variable together with high and low values. The context is a logical object variable CONTEXT. To introduce a level policy, the programmer calls the policy method and supplies a level variable, the desired level, and a boolean condition. The boolean condition is passed by name to delay its evaluation until concretization. This way, policies that refer to mutable parts of the heap will produce correct constraints for the snapshot of the system at concretization.
The Jeeves library supports mutation in variables and object fields by treating the mutable state as part of the context in concretize call to ScalaSMT. Mutable fields are interpreted at the time of concretize. Policies that depend on mutable state are evaluated to boolean conditions during concretization. The set of allocated JeevesRecords is supplied at concretization. These conditions together with the equality predicate CONTEXT = ctx are used to concretize expressions in ScalaSMT.
6. Experience
We have implemented a conference management system and a social network. Our experience suggests that Jeeves allows the programmer to separate the “core,” non-privacy-related functionality from the privacy policies, allowing the programmer to separately test policies and functionality.
6.1 Conference Management System
We have implemented a simple conference management system backend, JConf, to demonstrate how a well-known system with privacy concerns looks in Jeeves. This system is similar to the example we described in Section 2. Our implementation demonstrates that Jeeves allows us to implement all JConf functionality, including search and display over final paper versions, with a core functionality that is separate from the policies.
JConf supports the following subset of the functionality mentioned on the website for the HotCRP conference management system \cite{hotcrp}: smart paper search (by ID, by reviewer, etc.), paper tagging (for instance, “Accepted” and “Reviewed by: . . . ”) and search by tags, managing reviews (assigning, collecting responses, and displaying), and managing final paper versions. JConf does not implement functionality for which confidentiality is less key: for instance, the process of bidding for papers.
All JConf core functionality adheres to the privacy policies. JConf implements the following information flow policies:
- **Paper titles** are visible to the authors of the paper, reviewers, and PC members during all stages. Paper titles are visible to everyone during the public stage.
- **Author names** are visible to the authors on the paper during all stages, to reviewers and PC members during and after the rebuttal stage, and to everyone during the public stage if the paper has been accepted.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figure7.png}
\caption{Jeeves library in Scala}
\end{figure}
```scala
class PaperReview(id: Int, reviewerV: ConfUser, var body: String,
    var score: Int) extends JeevesRecord {
  val reviewer = {
    val level = mkLevel();
    val vrole = CONTEXT.viewer.role;
    val isInternal = (vrole == ReviewerStatus) ||
      (vrole == PCStatus)
    policy(level, isInternal, ⊤);
    policy(level, !isInternal, ⊥);
    mkSensitive[ConfUser](level, reviewerV, NULL)
  }
}
```

Figure 8: Definition of the PaperReview class (from PaperReview.scala).
- **Reviewer identities** are revealed only to PC members.
- **Reviews and scores** are revealed to authors of the paper, reviewers, and PC members after the review phase. During the review phase, reviewers must have submitted a review for a paper \( p \) before they can see \( p \)'s reviews.
Our JConf implementation allows us to separate the declaration of policies and code: we show the breakdown of code and policies in Table 1. The policies are concentrated in the data classes in PaperRecord.scala and PaperReview.scala, which describe the attributes and policies associated with maintaining data for papers and paper reviews. The other files, including JConfBackend.scala, do not contain policies. This allows the core functionality to be concise: the implementation of our back-end functionality as specified is only 56 lines.
The implementation of the core functionality of JConf is agnostic to the policies. The JConf back end stores a list of PaperRecord objects and supports adding papers, updating components of papers, and searching over papers by ID, name, and tags. We show the function to search papers by tag below:
```scala
def searchByTag(tag: PaperTag) =
  papers.filter(_.getTags().has(tag))
```
This function produces a list of symbolic PaperRecord objects which are equal to objects containing paper data if the paper tag is present and null otherwise. The core program can be concise because it does not have to be concerned with policies.
We implement policies specified in terms of program variables, such as a paper's list of tags, and values from the output context. To provide an example of a data class definition, we show the definition of the PaperReview class in Figure 8. A PaperReview object has the fields reviewer, body, and score. The PaperReview class defines a policy that the identity of the reviewer, as stored in the reviewer field, is visible only to other reviewers and PC members. The code introduces a new level variable level and adds policies requiring that the context viewer be a reviewer or PC member to see the field. The policies on allowed contexts for seeing the entire PaperReview object are defined in the PaperRecord class representing data associated with papers.
Localizing the policies with respect to data facilitates policy updates. To change the stage of the conference at which reviewers are allowed to see the names of authors, we can simply change the few lines of code corresponding to the author list policy. The programmer does not have to make coordinated changes across the code base to update policies.
6.2 Social Network
For social networks it is important to rapidly develop code that implements information flow policies. Privacy issues have put the social network website Facebook under the scrutiny of the American Federal Trade Commission [26], making it crucial that such sites do not leak sensitive data. At the same time, one of Facebook's key values is to "move fast": to rapidly develop innovative features [28]. Separation of policies from core program functionality can help developers rapidly develop privacy-aware features.
To investigate this hypothesis, we have implemented Jeeves SocialNet, a toy social network that uses Jeeves policies to control the confidentiality of user-shared data. The core functionality of Jeeves SocialNet involves storing and querying user attributes such as names, e-mails, and networks, a friendship relation between users, and dynamically changing properties such as user location. Jeeves SocialNet allows a user to define policies about who can see which versions of these attributes based on the relationship of the viewer to the user; the user can define different versions of their information to be shown to viewers depending on which level the viewers satisfy. These policies are stateful: for instance, a policy on the visibility of user u's location may refer to the location of u and the location of the output viewer v. Jeeves allows the programmer to develop policies and core functionality separately: in our source, all policies reside in the UserRecord class representing a user, while the query code in SocialNetBackend is left intact. The programmer can extend SocialNetBackend arbitrarily and rely on the Jeeves system to enforce information flow policies, and can easily change the policies enforced across the program by changing the policy code in UserRecord.
In the rest of this section, we walk through how we implement interesting policies in Jeeves Social Net: support for user-defined policies that may depend on the friendship relation, stateful location-data policies, and policies that have mutual dependencies as a result of a symbolic context.
**Defining Viewer Levels.**
Each sensitive field in a UserRecord object is defined in terms of the level of the output viewer. We use Jeeves level variables to define three levels: Anyone is the most permissive and allows public access; Friends allows access only to friends; and Self is the most restrictive, disallowing access to everyone except the user herself. The following function creates level variables associated with user-defined viewer levels:
```scala
def level(ul: UserLevel) = {
  val l = mkLevel();
  val me = CONTEXT == this;
  ul match {
    case Anyone ⇒
    case Self ⇒ policy(l, !me, ⊥)
    case Friends ⇒ policy(l, !(me || friends.has(CONTEXT)), ⊥)
  }
  l
}
```
The CONTEXT variable refers to the user at the other end of the output channel. The mutable set of friends is encapsulated in a private field friends of UserRecord.
We use this function to create sensitive values for user fields based on user-specified viewer levels. The constructor for the UserRecord class takes parameters nameL: UserLevel and friendsL: UserLevel to specify who can see the name and friends fields. To create a sensitive value for the name of a user, passed to the constructor as nameV: String, we declare the name field:
```scala
val name = mkSensitive(level(nameL), nameV, NULL)
```
We can create a friends list that is visible based on the friends level friendsL as follows:
```scala
def getFriends() = {
  val l = level(friendsL);
  friends.map(mkSensitive(l, _))
}
```
When these fields are accessed, the results will only be displayed to viewers who have an appropriate level of access.
Policies become implicitly combined when different sensitive values interact. To get the names of a user's friends, we simply call:
```scala
user.getFriends().map(_.name)
```
Although the code looks the same as it would without Jeeves, the viewer in CONTEXT must simultaneously be able to access the list of friends and the name property in order to see the name of a friend.
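This implicit combination can be made concrete in a small model. In the hypothetical Python sketch below (all names are ours, not the Jeeves API), a sensitive value concretizes to its real value or its default, and a friend's name is visible only when both the friends-list level and that friend's own name level are high:

```python
# A sensitive value is the real value if the viewer's level is high,
# and the default otherwise.
def mk_sensitive(level_high, value, default):
    return value if level_high else default

# Model of user.getFriends().map(_.name): to see a friend's name, the
# viewer must pass BOTH the friends-list policy and that friend's name
# policy -- the conjunction arises implicitly from the composition.
def visible_friend_names(friends_level, name_levels, names):
    out = []
    for friend, name in names.items():
        visible = friends_level and name_levels[friend]
        out.append(mk_sensitive(visible, name, None))  # None plays NULL
    return out

names = {"bob": "Bob", "carol": "Carol"}
# The viewer may see the friends list, but only Bob's name policy admits them:
assert visible_friend_names(True, {"bob": True, "carol": False}, names) \
       == ["Bob", None]
# A viewer barred from the friends list sees no names at all:
assert visible_friend_names(False, {"bob": True, "carol": True}, names) \
       == [None, None]
```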
**Location Policy.** The location mash-up website PleaseRobMe [3] demonstrates that if disclosure of geographic location information is not carefully managed, people can easily use this information for harm, for instance in determining candidates for apartment robberies. Jeeves allows programmers to easily express policies protecting location data based on not just “friend” relationships, but also on policies involving dynamically-changing user locations.
A user may choose to share her location with friends, with users nearby, or only with friends who are nearby. To write the policy that only a nearby user can see the location, we create sensitive values for the coordinates in the setter method, guarded by a DISTANCE policy:
```scala
var X: IntExpr = 1000
var Y: IntExpr = 1000
def setLocation(x: BigInt, y: BigInt) {
  val l = mkLevel();
  policy(l, DISTANCE(CONTEXT, this) ...
}
```
where the distance between the viewer CONTEXT and the user is defined as:
```scala
def DISTANCE(a: Symbolic, b: Symbolic) =
ABS(a.X − b.X) + ABS(a.Y − b.Y)
```
The policy uses sensitive values for X and Y to guard the values themselves. We can do this because whenever there are such circular dependencies, the Jeeves runtime will choose a safe but locally-maximal assignment to levels. For example, if all users in the network are nearby, it is safe to return the low (default) values for everyone. However, Jeeves would output the actual values, since that maximizes the number of high levels without sacrificing safety.
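One way to picture the runtime's behavior on such self-referential policies is a greedy fixpoint: start with every level high and demote any level whose guard is violated by the coordinates the viewer is currently allowed to see. The Python sketch below is our own simplification; the actual Jeeves runtime resolves these constraints with an SMT solver rather than this demotion loop.

```python
# Self-referential distance policy: a viewer sees real coordinates only
# when nearby, where "nearby" is judged from the coordinates the viewer
# is themselves permitted to see. Start optimistically (all levels HIGH)
# and demote violated levels until a fixpoint: a safe, locally-maximal
# assignment.
DEFAULT = (1000, 1000)     # the low/default coordinates

def concretize(real, high):
    return real if high else DEFAULT

def distance(a, b):        # Manhattan distance, as in DISTANCE above
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def solve(real_locs, viewer, bound):
    high = {u: True for u in real_locs}      # optimistic start
    changed = True
    while changed:
        changed = False
        vloc = concretize(real_locs[viewer], high[viewer])
        for u, loc in real_locs.items():
            if high[u] and distance(vloc, loc) > bound:
                high[u] = False              # guard violated: demote
                changed = True
    return {u: concretize(real_locs[u], high[u]) for u in real_locs}

locs = {"alice": (0, 0), "bob": (3, 4), "carol": (500, 500)}
out = solve(locs, viewer="alice", bound=10)
assert out["bob"] == (3, 4)        # nearby: real location revealed
assert out["carol"] == DEFAULT     # far away: default revealed
```

Greedy demotion is order-dependent, which is why the guarantee is a *locally*-maximal assignment rather than a globally optimal one.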
Since policies and query code are separated, to change the location policy we only need to modify the setter. A stronger policy that permits only nearby friends to see the location requires one change to line 5, replacing mkLevel() with level(Friends).
**Symbolic Context.** Jeeves also allows the context to contain sensitive values. As an example, consider the following function, which sends a user’s name to her friends:
```scala
def announceName(u: UserRecord) =
  for (f ← u.getFriends())
    yield email(f, u.name)
```
The email function sends to f a concretized version of u.name with CONTEXT = f. Since the friends list is symbolic, f is symbolic as well. This means that f takes its high (actual) value only if the corresponding friend of u is allowed to see the list of friends of u, and the name of u is revealed only if its policies permit f to see it. Because Jeeves handles circular dependencies by finding a safe but locally-maximal assignment, the Jeeves runtime system will send the name to each friend if the friend is permitted to see the name. Such reasoning about symbolic contexts is hard to simulate in runtime systems such as Resin [27] that do not use symbolic constraints.
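The effect of a symbolic context can be sketched by concretizing the output once per candidate recipient. In this hypothetical Python model (ours, not the Jeeves API), the two predicate arguments stand in for the solved level constraints: whether f may see the friends list, and whether f may see the name.

```python
NULL = None

def announce_name(name, friends, friends_level_for, name_level_for):
    # f ranges over a symbolic friends list: whether the real name flows
    # to f depends jointly on the friends-list policy and the name
    # policy, both evaluated with CONTEXT = f.
    outbox = {}
    for f in friends:
        ok = friends_level_for(f) and name_level_for(f)
        outbox[f] = name if ok else NULL
    return outbox

mail = announce_name(
    "Alice", ["bob", "carol"],
    friends_level_for=lambda f: True,       # friends list public here
    name_level_for=lambda f: f == "bob")    # only bob may see the name
assert mail == {"bob": "Alice", "carol": None}
```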
### 6.3 Jeeves Limitations
Jeeves currently provides only a limited amount of static checking. The implementation of Jeeves as a domain-specific embedded library in Scala relies on Scala type-checking to enforce static properties. At present, Jeeves does not provide static feedback about more complex program properties. For instance, neither the Jeeves design nor the implementation provide support for statically determining whether policies are consistent or total. We anticipate being able to detect properties such as underspecification and inconsistency using enhanced static analysis that we can implement as a Scala compiler extension.
There are many open questions regarding the usability of Jeeves. Symbolic evaluation and SMT are technologies that have been improving in performance, but it is not clear they can handle the demands of real-world applications. One direction for future exploration includes scalability of Jeeves programs, how to efficiently handle data persistence, and development of lighter-weight execution models. Another direction for exploration involves the ease of programming and testability of Jeeves programs.
### 7. Related Work
Jeeves privacy policies yield comparable expressiveness to state-of-the-art languages for verifying system security such as Jif [19], Fine [4], and Ur/Web [5]. These are static approaches that have no dynamic overhead. Rather than providing support for verifying properties, the Jeeves execution model handles policy enforcement, guaranteeing that programs adhere to the desired properties by construction, but with dynamic overhead.
The Jeeves runtime is similar to the system-level data flow framework Resin [27], which allows the programmer to insert checking code to be executed at output channels. Jeeves’s declarative policies allow the programmer to specify policies at a higher level and allow automatic handling of dependencies between policies.
There are also parallels with dynamic approaches to security. Devriese and Piessens’s secure multi-execution approach executes multiple copies of the program, providing defaults to copies that should not get access to secret inputs [7]. Jeeves’s symbolic evaluation obviates the need to execute multiple program copies and Jeeves allows more complex policies, for instance ones that may depend on sensitive values. In this space is also Kashyap et al.’s scheduling-based dynamic approach, which partitions a program into sub-programs for each security level for ensuring timing and termination non-interference. The focus of this is different from our work, which does not address timing or termination.
Jeeves can also be compared to aspect-oriented programming (AOP) [11]. Existing frameworks for AOP provide hooks for explicit annotations at join points. Jeeves differs from AOP because Jeeves's constraint-based execution model supports a more powerful interaction with the core program. The most similar work in AOP is Smith's logical invariants [24] and his method for automatically generating aspect code for behavior such as error logging [23]. Smith's method is static and involves reconstructing values such as the runtime call stack in order to insert the correct code at fixed control flow points. Jeeves allows policies to affect control flow decisions.
The way Jeeves handles privacy is inspired by angelic nondeterminism [8]. Jeeves most directly borrows from CFLP-L, a constraint
functional programming calculus presented by Mück et al. [18]; similar functional logic models have also been implemented in languages such as Mercury [25], Escher [15], and Curry [9, 10]. Our system differs in the restrictions we place on nondeterminism and in the execution model. Jeeves leaves functions and the theory of lists out of the logical model. Jeeves execution also supports default logic [1] to facilitate reasoning when programming with constraints.
Our work is also related to work on executing specifications and dynamic synthesis. Jeeves differs from existing work on executing specifications [16, 21] in our goal of propagating nondeterminism alongside the core program rather than executing isolated nondeterministic sub-procedures. Program repair approaches such as Demsky's data structure repair [6], the Plan B system [22] for dynamic contract checking, and Kuncak et al.'s synthesis approach [13] also target local program expressions.
### 8. Conclusions
Our main contribution is a programming model that allows programmers to separate privacy concerns from core program functionality. We present the Jeeves programming language and formally define an execution model that produces outputs consistent with the policies.
### Acknowledgments
We would like to thank Saman Amarasinghe, Arvind, Michael Carbin, Gregory Malecha, Sasa Misailovic, Andrew Myers, Joseph Near, Martin Rinard, and Joe Zimmerman for their input and feedback.
How a Geographically Distributed Software Team Managed to Negotiate Successfully using Chat Technology
Tenorio, Nelson; Bjorn, Pernille
Published in:
Revista Tecnologia e Sociedade
DOI:
10.3895/rts.v15n37.8655
Publication date:
2019
Document version
Publisher's PDF, also known as Version of record
Document license:
CC BY
ABSTRACT
Negotiation is best accomplished in collocated settings, and negotiation in geographically distributed settings is prone to failure with a risk of conflicts. Investigating distributed software development, we were surprised to discover that a software development team, located in different parts of Brazil, was able to negotiate successfully and reach an agreement to change from ticket-oriented processes towards release-oriented processes for bug-fixing activities using only chat technology. In this paper, we explore how the chat technology allowed the distributed software team (including both vendor and client team members) to successfully negotiate and reach agreement about adopting and implementing a new collaborative workflow in the governmental IT-project. Our research method is based upon an ethnographically informed empirical study of the software development involved in a Brazilian software company. The data collected shows that the chat technology provided a platform for the team to engage informally in important discussions across locations. The chat technology allowed participants to navigate both within and across diverse subgroups (collocated client-developers, distributed client-developers, and distributed developers-developers), which supported successful subgroup dynamics, avoiding the risk of conflicts emerging from faultlines.
INTRODUCTION
Software projects are often done in distributed settings, where clients and the software development team are geographically distributed. Despite the geographical distance, participants often work in closely-coupled work arrangements (ESBENSEN; BJØRN, 2014; CRAMTON, 2001; JENSEN, 2014), structured by different types of agile methodologies (ESBENSEN; BJØRN, 2014; ŠMITE; MOE; ÅGERFALK, 2010). Such projects depend on participants' ability to navigate, coordinate, and communicate using diverse collaborative technologies (BJØRN et al., 2014; BJØRN; HERTZUM, 2006; BODEN et al., 2014; MARK et al., 2002) in which the majority of the interaction is accomplished, e.g., chat groups, online forums, video conferences, document repositories, and emails (CHRISTENSEN; BJØRN, 2014; GUO et al., 2009; SEGENREICH, 2008; DABBISH et al., 2005; HERBSLEB et al., 2002). While the interaction in software projects is multiple and diverse, we are in this paper particularly interested in the negotiation activities within distributed software teams.
Negotiation is a critical activity for software developers, where participants discuss and reach agreement about how and why certain details and structures are to be organized and implemented in certain ways; it continues to be an activity throughout the whole project lifecycle (CHRISTENSEN; BJØRN, 2014). In geographically distributed settings, negotiation activities are facilitated and mediated by cooperative technologies (JOWETT, 2015; LI; ROSSON, 2014). However, technology-based negotiation activities have been identified as being prone to failure in geographically distributed settings. In this sense, researchers have pointed to working across time zones, culture, and professional language as some of the reasons for the challenges (MARK et al., 2002; OLSON; OLSON, 2000; VALLEY; MOAG; BAZERMAN, 1998). Given these insights from prior research, we were surprised to find in our empirical case that a Brazilian software development team, consisting of team members from both the vendor and the client, managed to negotiate successfully. Moreover, that team implemented a new collaborative work structure using primarily text-based group chat technology.
Chat technology is of core interest to the Computer-supported Cooperative Work (CSCW) community, and the potential for using such technologies (e.g., Skype or Slack) in organizations of high complexity has been identified as an important research agenda (RIEMER; FRÖSSLER; KLEIN, 2007). Chat technology provides low-cost accessibility to team members across geography and time (MORAES; CABELLO, 2017; HSIUNG, 2000; ANDERSON; KANUKA, 1997). Moreover, chat technology can potentially facilitate closely-coupled interaction and communication within and across organizations (FAYARD; DESANCTIS, 2005; CLÉMENT; BAKER; MACINTYRE, 2003). By supporting 'lightweight' communication, chat technology provides alternative ways for participants to discover co-workers' availability, which can potentially trigger opportunistic communication, supporting some degree of team context and facilitating cooperative inquiry across the entire team (HERBSLEB et al., 2002). Successful use of chat technology depends on participants' abilities to establish and develop norms, context, common language, and problem definitions across all participants (MALHOTRA et al., 2001). However, negotiation activities –
especially cross-organizational negotiation, where financial and political considerations exist – make it difficult to develop common language and shared context, and thus developing new technologies supporting negotiation across geography continues to be a challenge (BJØRN; HERTZUM, 2005; OLSON; OLSON, 2000). Therefore, this paper aims to explore how chat technology allowed the distributed software team, comprising both vendor and client team members, to successfully negotiate and reach agreement about adopting and implementing a new collaborative workflow in the governmental IT-project. The research question we explore in this paper is: How did the geographically distributed software development team successfully negotiate and establish a new workflow structure changing their work arrangement, using primarily group chat technology? To answer it, we performed an ethnographically informed empirical study of the software development involved in a Brazilian software company, collecting data by observing chat groups containing fifty-five negotiation cases. Based upon our empirical findings, we find that the negotiation succeeded not just because the team developed norms and common language, but because the chat group technology facilitated grounding activities (CLARK; BRENNAN, 1991) both within and across the diverse subgroups involved in the negotiation, namely client-developer at the same location, client-developer across locations, and developer-developer across locations. When geographically distributed teams are composed of collocated subgroups, there is a tendency for such subgroups to coalesce into smaller units, especially if demographic attributes align with the collocated subgroups, and such setups risk producing faultlines (CRAMTON; HINDS, 2004).
We found that the software developers overcame the risk of faultlines in their negotiations because the affordances of chat technology allowed them to navigate across and within the diverse subgroups, breaking down the barriers of demographic attributes and organizational belonging. Through cultural language exchange (ROBINSON, 1991), the participants managed to create and navigate permanent records of decisions, manifested as shared digital objects in the chat group technology. Thus, the chat group technology supported synchronous interaction, facilitating a dynamic negotiation context comprising both informal and formal language exchange simultaneously.
The remainder of the paper is organized into six sections. Following this introduction, we present the theoretical background of this study, then our research method, followed by the results of our analysis. Finally, we discuss our findings and provide our conclusion.
CHALLENGES FOR COOPERATIVE NEGOTIATION ACROSS GEOGRAPHICAL DISTANCE
Collaboration within geographically distributed teams has been a core concern for CSCW research since its inception, and there is a long canon of research papers which have explored the challenge of distributed collaboration for the design of cooperative technologies from all kinds of perspectives and in different domains (HINDS; RETELNY; CRAMTON, 2015; BODEN et al., 2014; OLSON; OLSON, 2000). One core domain for the research on distributed teams is software development (BJØRN et al., 2014), since geographically distributed software development has become the norm rather than the exception for how the work is organized when
we design IT systems (HERBSLEB, 2007). Core challenges for distributed software development have been identified as linking to temporal constraints (HERBSLEB; PAULISH; BASS, 2005), to coordination challenges (CHRISTENSEN; BJØRN, 2014), as well as to commitment and trust (SØDERBERG; KRISHNA; BJØRN, 2013). While technological development has improved the conditions for distributed software development, one core challenge remains: creating common ground related both to the project at hand and to how to collaborate (BJØRN et al., 2014).
Common ground is established through grounding in conversations, where participants provide evidence and references supporting their argumentation through the face-to-face shared context, characterized by various aspects such as co-presence, visibility, audibility, and simultaneity (CLARK; BRENNAN, 1991). This means that whether it is possible to create common ground in distributed settings depends tremendously upon the affordances of the technology (e.g., chat, video conference, document repositories) supporting the interaction and the coordination of work (BJØRN; NGWENYAMA, 2009; HINDS; WEISBAND, 2003; ARMSTRONG; COLE, 2002; CRAMTON, 2001). Thus, creating common ground concerning the project and the process requires participants to have a fundamental basis, in this case a shared context. With that shared context, participants can engage in the negotiations and discussions required to make important decisions, facilitated by informal language constructs (ROBINSON; KOVALAINEN; AURAMÄKI, 2000). Finding ways to establish a shared context by which negotiation can take place, supporting distributed software development projects using technology, is not trivial.
**Shared context and risk of faultlines**
When two or more people interact collocated, they automatically share a physical context providing rich cues such as facial expressions, which supports the conversation (MATTHIESEN; BJØRN, 2016; RANGANATHAN et al., 2002). A shared context can emerge when team members share common professional language and vocabulary relevant for their work processes, work cultures, and use of digital tools, potentially reducing the risk of conflicts (HINDS; MORTENSEN, 2005). However, people involved in geographically distributed projects cannot automatically create a shared context (SCHILIT; HILBERT; TREVOR, 2002), potentially missing important contextual information, thus increasing the difficulty of identifying and solving problems, which in turn increases the likelihood of emerging conflicts (HINDS; MORTENSEN, 2005). Frequent interaction has been pointed to as essential for negotiation and conflict resolution (CHRISTENSEN; BJØRN, 2014; HINDS; MORTENSEN, 2005; HINDS; BAILEY, 2003). However, a high volume of messages in communication tools risks eroding shared context by depersonalizing the interaction (SPROULL; KIESLER, 1992). While technology-mediated text-based interaction generates less social presence and lacks the social cues of face-to-face conversation (POSTMES; SPEARS; LEA, 1998), the more fundamental challenge is the lack of shared context, creating contextual differences. Such shared context is hard to articulate and identify during text-based chat and consequently causes misunderstandings among the participants (HINDS; WEISBAND, 2003). This suggests that virtual teams are likely to experience more conflict in negotiating and coordinating tasks than a
collocated team (HINDS; BAILEY, 2000). Indeed, increasing social presence by establishing a shared context is relevant if we are to support technology-mediated interaction between subgroups of collocated and distributed teams. Such context-aware technology is a class of communication tools which addresses people's knowledge context to leverage communicative understanding (SCHILIT; HILBERT; TREVOR, 2002).
While the majority of the literature on shared context and negotiation focuses on teams where all participants are geographically distributed, the situation in distributed software development is often that not every individual is geographically distributed. Instead, distributed software development is often based on a situation of distributed subgroups, where several developers are collocated while subgroups are geographically distributed. When you have geographically distributed subgroups, there is a risk of faultlines. Faultlines refer to conceptual dividing lines which split a group into at least two relatively homogeneous subgroups based on the alignment of group members' demographic and other individual attributes, which impacts group processes and outcomes, both performance and emotional experience (THATCHER; PATEL, 2012; BEZRUKOVA et al., 2009; SHEN; GALLIVAN; TANG, 2008). Thus, subgroup formation influences the performance of the whole group above and beyond what can be predicted by diversity alone (THATCHER; PATEL, 2012). For instance, a faultline may occur based on education level or work experience, starting entirely different dynamics in a group, i.e., group members create relatively homogeneous subgroups based on informational characteristics of individuals that are directly job-related – in this case, the faultline category is information-based (THATCHER; PATEL, 2012; BEZRUKOVA et al., 2009). When team members experience problematic subgroup dynamics, it is difficult to overcome the geographical distance (CRAMTON, 2001). To create and establish task cohesion, which can counter the risk of faultlines, geographically distributed teams must develop shared norms, roles, and procedures by which they can experience accuracy of mutual comprehension (e.g., shared context).
Moreover, such teams have shared expectations regarding the common goal, how to organize interdependency and mutual trust, and the frequency of communication among members (LOCKWOOD, 2017; ARMSTRONG; COLE, 2002). Therefore, successful subgroup dynamics must reduce the risk of faultlines generated by time, national/regional culture, and geographical distance, in order to integrate teams from different locations and provide the means for sound negotiations.
Chat technology supporting negotiations
In collocated or distributed projects, communication occurs through synchronous and asynchronous means. Asynchronous communication is considered appropriate when activities have low complexity, while synchronous communication is most applicable when complex activities are involved (RIOPELLE et al., 2003). However, in distributed teams, synchronous interactions are frequently embedded in a broader context of asynchronous interactions and of the informal activities carried out by the participants (OLSON; OLSON, 2000). Chat technology refers to the type of technology which allows participants to interact through text-based messaging, such as Messenger, Skype, WhatsApp, and Slack. We are currently witnessing how chat technology is
increasingly being introduced into the workplace. Chat technology is thus entering the workplace and becoming part of what shapes the form of communication taking place in organizations. With the introduction of chat technology, we also see a decrease in the use of email, phone calls, and other means of communication (GREIF; MILLEN, 2003). In distributed software development, software developers have used chat technology in bug fixing, reducing the effort of articulation work (TENÓRIO; PINTO; BJØRN, 2018), and to coordinate their activities (BODEN et al., 2014). Chat technology offers software developers new advantages for communication, since messages can be modified and reviewed, and the complete conversation can be shared over time (VAN DER ZWAARD; BANNINK, 2014). Although miscommunications cannot easily be resolved when using textual interaction (FORD et al., 2017; TERUI; HISHIYAMA, 2014), the ability of chat to send short messages using informal language offers a means for agile communication, and messages can be saved and, occasionally, retrieved and forwarded to other groups or individuals (GREIF; MILLEN, 2003). Also, the permanent nature of chat messages can form a common discussion point for participants (ROBINSON, 1991). However, formal language is not always feasible, since, depending upon the context, a rapidly changing project environment may require informal communication supported by informal language use (DE VRIES; LAGO, 2010; ÅGERFALK; FITZGERALD; HOLMSTRÖM, 2005; CLERC; HERBSLEB et al., 2000). Nonetheless, both formal and informal dialogue can obstruct the conversation if messages are shared outside the intended audience (ROBINSON; KOVALAINEN; AURAMÄKI, 2000). Professionals who share similar perspectives through professional language and knowledge find it easier to develop common language and norms that can form a basis of communication within the distributed team (OAKLEY, 1999).
This allows for healthy interactions between distributed team members, facilitated by informal language use (HINDS; MORTENSEN, 2005). Previous research thus points out how chat technology can facilitate communication in the workplace. However, our interest here is focused on how chat technology supports negotiations in a geographically distributed software development team of vendors and clients.
**RESEARCH METHOD**
Our research is based upon an ethnographically informed (RANDALL; HARPER; ROUNCEFIELD, 2007) empirical study of the software development work in a Brazilian software company. We studied the work involved in organizing the collaborative effort, focusing on the use of technological artefacts (BLOMBERG, J.; KARASTI, 2013). In particular, we were interested in how the software development team used chat technology to support collaboration across geographical sites of design (BJØRN; BOULUS-RØDJE, 2015). In this work, we followed a software team that worked on a governmental IT-project: E-Account. E-Account is an information system designed to support a Brazilian municipality in organizing, monitoring, and controlling public accounts. Our interest is not the content of the E-Account project, but rather the way the software development team collaborated. In total, twenty-three developers were involved in the E-Account project; we focus on how these software developers, who represented both the vendor and the client, negotiated using chat technology. We refer to the vendor company as BrazilSoft.
Empirical settings
The E-Account project is a Brazilian governmental IT-project in which the teams are geographically dispersed, with a temporal distance of +3 hours from the vendor site to the client site. The project started in 2011 and is part of a larger information system, web-Gov, which went online by the end of 2012. The web-Gov information system is designed to support the public administration of one capital in the north of Brazil, which has around 420,000 inhabitants. Currently, web-Gov has been running live for six years and has approximately 1,200 users, all municipality employees. However, the system is continuously being expanded, reconfigured, and re-designed. Thus, the web-Gov IT-project can be seen as an ongoing infrastructure activity, which shapes how the municipality functions based upon insights from the users. Furthermore, new functionalities will be made available, so the system will not only be used by municipality employees but will also serve 190,000 citizens in their interaction with the government.
E-Account project
The E-Account project is an example of a mixed operation, combining offshoring and outsourcing. While the company BrazilSoft is located in a city in the south of Brazil, the client is 3,573 kilometers away in the north of the country. BrazilSoft has an offshoring operation at the client site, with a team composed of five employees, including one operation manager, one project manager, and three developers. Furthermore, there are two BrazilSoft partner-firms in an outsourcing operation to support E-Account. They are responsible for maintaining the client infrastructure and developing the web-Gov web services. The BrazilSoft local team has twenty-five employees, among them an operation manager, project managers, developers, and testers. Thus, BrazilSoft is considered a medium-sized software development company, connecting more than fifty people in the project. The communication among BrazilSoft’s local team, the distributed team, and the client is primarily organized in distributed settings supported by chat technology, in particular, eleven Skype chat groups and five WhatsApp groups. Each chat group has a concrete purpose and is related to a specific topic, such as technical support, change requests, administrative issues, contract terms, and work coordination. The client participates in some chat groups, while others are exclusive to BrazilSoft employees.
Data collection and analysis
Data were collected through interviews and observations of the interaction in the chat groups. We conducted five face-to-face interviews with vendor stakeholders (e.g., directors, project managers, and developers) during May 2017. All interviews were in Portuguese and recorded with the consent of the interviewees. During the interviews, the use of chat group technology kept appearing as critical for the negotiation practices within the team, and we decided to explore this further. In total, eleven Skype chat groups were created by BrazilSoft, each aimed at interacting with clients. We obtained permission to participate as ‘observer’ in four Skype chat groups for four months. Thus, we were able to collect the complete interaction in the four chat groups. Our data analysis
was done in two steps. First, we listened to, transcribed, and codified the interviews using ‘Express Scribe’. Second, we collected the chat scripts of the observed chat groups, which were then imported into Express Scribe for analysis and codification. Both interviews and chat data were codified by identifying themes in the conversations, aiming to identify interesting interaction aspects. Through this process, we began to notice how the users were applying the chat technology to support their negotiations. Thus, we decided to focus on the instances in the data where the client and the vendor were negotiating different aspects of their work, such as tickets, releases, bugs, validations, and workflows. In total, we had eighteen pages of chat group transcriptions over ninety days, referring to the two most active chat groups (Ticket Chat Group and BrazilSoft Private Chat Group). Table 1 gives an overview of the interaction in the two chat groups.
We observed fifty-five cases in the chat groups where the participants negotiated various aspects, and in forty-three of these instances they succeeded in reaching an agreement (see Table 1). All these negotiations were conducted using only chat technology; no other types of technology such as email or phone were used. Thus, the interesting aspect from our perspective is that the software developers were able to negotiate successfully using only text-based chat, i.e., no video, email, or other types of technology. Each of the negotiations demonstrates similar patterns; therefore, in the next section, we present our findings focusing on one example to demonstrate our empirical observations.
<table>
<thead>
<tr>
<th>Items</th>
<th>Ticket Chat Group</th>
<th>Private Chat Group</th>
<th>Total</th>
</tr>
</thead>
<tbody>
<tr>
<td>Conversations</td>
<td>504</td>
<td>678</td>
<td>1182</td>
</tr>
<tr>
<td>Observed Negotiations</td>
<td>20</td>
<td>35</td>
<td>55</td>
</tr>
<tr>
<td>Successful Negotiations</td>
<td>14</td>
<td>29</td>
<td>43</td>
</tr>
<tr>
<td>Participants</td>
<td>11</td>
<td>9</td>
<td>20</td>
</tr>
</tbody>
</table>
Table 1 - Negotiations observed in the chat groups
Source: The authors
**RESULTS**
The web-Gov information system has been in use for six years, and the mixed vendor/client software team in the E-Account project was created to continually identify and collect new user requirements or bugs in the system, which were to be analyzed, potentially implemented, and finally deployed as additional functionality in the production environment. The organization of the work in E-Account is ‘ticket-oriented’, which means that the coordination of activities is structured by tickets. This entails that all new tasks are organized into tickets, which are then prioritized according to the client’s urgencies. The prioritized ticket list is thus the main coordination tool for the software developers. In order to organize the work, all new user requirements are entered into the software management repository called Redmine. Redmine is a web-based open-source software management application designed to coordinate requirements. Thus, each requirement is created as a ‘ticket’ in the Redmine repository. The client (the municipality) is responsible for accessing and creating tickets in Redmine, including describing
each requirement and defining its priority. BrazilSoft’s developers then access Redmine to identify requirements, assigning themselves as responsible for particular tickets. The BrazilSoft project manager also accesses Redmine on a regular basis, monitoring the status of all tickets. When a ticket is done, the developer records this in Redmine, the client validates the ticket and, if it is approved, informs BrazilSoft’s developers to include the ticket in the web-Gov production environment. However, over the last years, the ticket quantity has increased considerably, and BrazilSoft has experienced several client complaints regarding delays in including validated tickets in the production environment.
Despite the ticket control embedded in Redmine, the ticket-oriented process was continually failing, since the client frequently forgot to validate tickets or the vendor forgot to include them in the production environment. These events increased the client’s complaints regarding unavailable features in the web-Gov system and increased the tension in the vendor-client relationship. Attempting to avoid client claims, 18 months before our research, the vendor introduced chat group technology in order to monitor the ticket-oriented process. The vendor’s intention was to streamline the coordination of the tickets. Concretely, the vendor notifies the client of the tickets which require validation before including them in the production environment. The ‘ticket chat group’ was successful in the first three months; however, issues then began to arise. Communication breakdowns took the form of the client forgetting to report in the ‘ticket chat group’ which tickets were validated and thus ready for inclusion in the web-Gov production environment. Delays became a large problem, and due to the contractual structure between the vendor and the client, delayed tickets meant that the vendor had to pay fines to the client. The increasing number of fines in the project became a stress-point for the client-vendor relationship and generated conflicts in the cross-organizational team. The conflict was openly visible to everybody, since it took place in the ‘ticket chat group’, exposing the problems to all participants. We observed forty-seven messages exchanged in the chat group concerning the issues of delayed validations and fines. Below we zoom in on the core exchanges. The following quotation exemplifies the issues between the client and John, the project manager at the vendor site.
Client: “How come that ticket [ID-number] isn’t yet in the production environment.”
John (BrazilSoft site): “Because the ticket has not been validated by you yet.”
Client: “Did you ask me [to validate the ticket] through notification features in the ticket chat group?”
John (BrazilSoft site): “No, I forgot, sorry. But you could look at Redmine. See the ticket [a picture was posted in the chat group]. What you see here is that there is a red alert [see the screen shot]. This red alert means, we are waiting for you to validate the ticket before we can proceed.”
Client: “This practice is not what we agreed on. We decided that our work routine for validation of ticket by us but go through the ticket chat group. You and the others MUST notify me in the request
A few days later, after the discussion above, Peter, a project manager at the client site, suggested at a face-to-face meeting with the client to replace the current ticket-oriented process with a release-oriented process. Peter argued that the adoption of a release-oriented process would ‘pack’ a set of tickets into one release, which would facilitate validation. Consequently, the messages exchanged in the ‘ticket chat group’ regarding ticket validation would also be reduced, since a release contains a set of tickets rather than individual tickets. Potentially, conflicts concerning tickets would be avoided. However, such a change would require substantial differences in the way the work was organized, both contractually and in terms of processes. Thus, a longer negotiation concerning the possibility of changing the process was initiated. This negotiation took place in two chat groups, and it all began in the ‘ticket chat group’.
Peter (client site): “Hi guys. Yesterday, I had a meeting with the infrastructure team and [client name], where we discussed our workflow. Thus, who will decide what to include in the production environment. The decision is ultimately the system manager [client name]. However, I suggested to update our current work routine replacing ticket-oriented process by a release-oriented process. They liked the idea however, we need to discuss this idea, and how to proceed.”. Ticket chat group, via Skype, Jun 22nd, 2017 (translated from Portuguese)
What happens in the above quotation is that Peter, one of the core software developers, who is located at the client site, explains how he has been discussing a potential new way of organizing the workflow in the team. More importantly, he also suggests concrete changes and supports them by referring to the client having approved of the idea. Consequently, a team member from the client site also writes a message in the chat group supporting Peter’s idea, further demonstrating that the client supports it.
Client: “Hello, everyone. As [name of the project manager at client site] wrote, we are excited to adopt the release-oriented process. As far as I know, this will make our validation process much easier. We are looking forward to adopting it.”. Ticket chat group, via Skype, Jun 22nd, 2017 (translated from Portuguese)
So, we have a situation where people collocated in the project have had important discussions about the workflow. Moreover, it is important to notice that while both of the above participants are collocated at the client’s geographical location, they represent two different parties, namely the client and the vendor. Meanwhile, at the vendor’s geographical location, the idea of change was not fully embraced. To have such a discussion internally within the vendor team before including the client, the project manager created a new discussion forum, the ‘BrazilSoft private chat group’, in which John, a software developer in BrazilSoft working remotely from the client, resisted the idea of changing the workflow towards a release-oriented process.
John (BrazilSoft site): “I’m tough about this situation [replacing ticket-oriented by release-oriented] because we risk increasing our delay
since the client then want to include additional new functionalities for each production release. Currently, the client already delayed their validation of new functionalities, so if we adopt this new process, the delay could increase because they will wait to include a new set of functionalities together in the production environment. Maybe it does not avoid the complaints about the existing delay.”. BrazilSoft private chat group, via Skype, Jun 22nd, 2017 (translated from Portuguese)
What is interesting here is that the discussion moved from the ‘ticket chat group’ to the ‘BrazilSoft private chat group’. The private chat forum allowed the vendor to negotiate within BrazilSoft, excluding the client while still including all BrazilSoft employees, also the ones located at the client site. The negotiation continues, and Peter attempts to convince John that the opportunity to move towards a release-oriented process is appropriate and supports the software developers in BrazilSoft.
Peter (client site): “I agree with you, but if we adopt release-oriented process, everything that is done within the release goes to the production together in a short time. Moreover, we always wanted to adopt release-oriented process. It is great chance for us.”. BrazilSoft private chat group, via Skype, Jun 22nd, 2017 (translated from Portuguese)
The synchronous interaction continues, and John resists Peter’s argument to adopt the release-oriented process. He refers to the timing of the change and how it might drastically change their current workflow, causing problems. John explains how such a change is not trivial, but instead involves complex changes to their existing workflow review processes.
John (BrazilSoft site): “I agree, but I think we shouldn’t do this now. I think that is a bad idea because it changes drastically our current workflow routine which demands a review of our work flow.”. BrazilSoft private chat group, via Skype, Jun 22nd, 2017 (translated from Portuguese)
Following this interaction, multiple opinions and concerns are presented in the chat group. Evidently, the interaction leads to a conflict within the vendor team between the project managers John and Peter (both working for BrazilSoft, but geographically located at different sites). The main issue is the impact which the potential change will have on their workflow review process. Trying to resolve the issue, BrazilSoft’s operation manager enters the negotiation.
Operation Manager (BrazilSoft site): “Hey guys. Currently, they do this! I think it doesn’t have a significant impact on our current workflow. Just few adjustments.”. BrazilSoft private chat group, via Skype, Jun 22nd, 2017 (translated from Portuguese)
The operation manager tried to make the issue less controversial. Moreover, another vendor software developer also enters the discussion, supporting the operation manager and arguing for making the change in the release process. In this negotiation, it is important that the chat technology allows people to enter the negotiation over time, and thus the ‘BrazilSoft private chat group’ provides a shared context supporting discussion and negotiation across the geographically
dispersed developers. Thus, we now have a case where people at both geographical sites agree with and support the change. However, it is important to notice that John (who still resists the change) is a core employee and his opinion matters, even though other developers approve of the change, as shown below.
Developer (BrazilSoft site): “They’ll continue doing what they always do. I don’t think that is a problem to adopt release-oriented now. It’ll facilitate our work reducing the current validation problems.”. BrazilSoft private chat group, via Skype, Jun 22nd, 2017 (translated from Portuguese)
Furthermore, the operation manager also decides to modify his first opinion to support John, saying that once they have made this change, it will be impossible to return to the previous workflow. Thus, they should be entirely sure that it is a good idea to replace the ticket-oriented process with the release-oriented workflow.
Operation Manager (BrazilSoft site): “What they need to understand is that once adopted release-oriented there is no how to get back. I mean, everything in it must be validated as release. [...] It is because there is no way to separate the codes after being integrated. That would improve our work routine.”. BrazilSoft private chat group, via Skype, Jun 22nd, 2017 (translated from Portuguese)
Peter, who was the one starting the whole discussion, then copies and pastes the message from the client which was originally posted in the other Skype group, namely the ‘ticket chat group’. He follows the pasted message by arguing that the ticket-oriented workflow process is currently not working. The issue is that the client frequently forgets which tickets must be validated, thus loses control over the process, and everybody gets delayed.
Peter (client site): “I reinforce that a ticket-oriented is not is not good for us because them [client] is not validating each ticket due the high-ticket quantity. Thus, they are forgetting to validate each ticket due it is hard to control. This is the reason why the release-oriented process can figure it out. I’d also like to highlight that at the client meeting, yesterday, everyone [client names] agreed with this change commenting that it can be good for all of us. In addition, in our ticket chat group [client name] wrote: ‘[message pasted from ticket chat group]’. Then, we shouldn’t lose this opportunity to change and improve our process.”. BrazilSoft private chat group, via Skype, Jun 22nd, 2017 (translated from Portuguese)
The local project manager then agrees with his colleagues to adopt the new release-oriented delivery process. Nonetheless, he suggests that a workflow process be designed, presented, and approved by all, aiming to make the new process clear to the client.
John (BrazilSoft site): “Okay, I agree only if we design a workflow formalizing this process [release-oriented]. And they [client] need to approve the workflow proposed by us. The workflow will be our guarantee of the agreement. I can design and send the workflow to them.”
Operation Manager (BrazilSoft site): “I agree.”
Peter (client site): “OK. [emoji with smile]”. BrazilSoft private chat group, via Skype, Jun 22nd, 2017 (translated from Portuguese)
The project manager at the client site sent a ‘like’ sign in the group, indicating that he agreed with the idea. The next day, the negotiation of the release-oriented versus ticket-oriented work structure moved from the private chat group to the ‘public’ ticket chat group, in which the project manager at the vendor site sent a message accepting the change.
John (BrazilSoft site): “Ok [client] we agree, and we’ll design the release-oriented process in a workflow to be approved for you all. I’ll send the workflow soon.”
Client: “Good news! I’m looking forward to seeing the workflow.”
Peter (client site): “(Y) [Thumb up emoji]”. Ticket chat group, via Skype, Jun 23rd, 2017 (translated from Portuguese)
On July 14th, 2017, John shared a document describing the first version of the workflow in BrazilSoft’s private chat group and invited participants to validate it. The operation manager and the administrative manager (who also participates in the group) suggested a few adjustments. John sent a second version two hours later, which was approved by all participants in BrazilSoft’s private chat group. Afterwards, the local project manager sent the final version of the workflow to the inclusive ‘ticket chat group’, where the client approved it a few hours later.
Analyzing the negotiation process in the chat groups, we observed that the discussion moved dynamically between the subgroups – from the inclusive ‘ticket chat group’ to the restricted ‘BrazilSoft private chat group’. While the negotiations might on the surface seem like a discussion on the work process, they were, in fact, also a demonstration of a power struggle between the two geographically distributed project managers. Since Peter works at the client site on a daily basis, he took the liberty to suggest a workflow change without consulting John, working at the vendor site. Thus, when John first learned that Peter had made a proposal to the client on a drastic change to the client-vendor relationship without consulting him, John became critical, and a potential conflict began to arise, which was initiated in the ‘public’ chat group but moved into the vendor’s private chat group. Moreover, even though chat technology is fundamentally asynchronous, the messages in the above examples were exchanged almost as in a synchronous interaction: the technology allowed Peter and John to negotiate the workflow changes promptly, with the participation of other colleagues, who gave their opinions voluntarily. What made the negotiation successful was that the chat technology enabled the participants in both formal and informal language exchanges, where they could constantly move between levels of negotiation. In this way, the double-language level (i.e., informal and formal language) allowed the participants to develop a shared context which supported multiple people in navigating across subgroups and language levels, utilizing the permanent record created by the chat technology.
**DISCUSSION**
From the software vendor’s perspective, the issue of the workflow change produced a delicate situation. Concretely, Peter, the project manager at the client site, had initiated an unauthorized negotiation with the client, without first checking the vendor’s opinion. By initiating the negotiation with the client, Peter also shaped the client’s expectations regarding the vendor’s interest in moving towards release-oriented processes. This situation meant that it was important for Peter to convince John that the release-oriented process was the way to go; if he failed, he would have to face the client and explain why the process change was not good, with the risk of losing face.
Luckily for Peter, the software development team managed to successfully negotiate and solve their challenges concerning how to organize the collaborative process, despite being geographically distributed and interacting using the online chat group. Prior research has pointed out how negotiation and miscommunications cannot easily be solved using primarily textual interaction (FORD et al., 2017; TERUI, K.; HISHIYAMA, 2014; BJØRN; HERTZUM, 2006; VALLEY; MOAG; BAZERMAN, 1998), due to the lack of implicit clues and spatial references which support the creation of a shared context (HINDS; BAILEY, 2000; SPROULL; KIESLER, 1992). When people are collocated, they are able to use gestures and facial expressions to indicate through feedback loops how they are interpreting the situation, supporting negotiations. In this sense, the question becomes: what made the negotiation a success despite the lack of feedback and contextual information? How did the chat group technology allow the distributed software developers to reach an agreement? Our data extend prior CSCW research on negotiation protocols for work (ESBENSEN; BJØRN, 2014) and the use of chat technology in organizations (RIEMER; FRÖSSLER; KLEIN, 2007) in several ways.
Firstly, our data show that the textual and permanent nature of the chat group technology was crucial for supporting the negotiation between the vendor and the client. Prior research on chat technology (JOWETT, 2015; LI; ROSSON, 2014; HSIUNG, 2000; IM; CHEE, 2006) also supports this finding, since it points out that keeping the conversation history is an essential feature of chat technology. By saving the complete conversation history, it is possible for users to access and analyze prior conversations (VAN DER ZWAARD; BANNINK, 2014), supporting reflective behavior and the potential re-submission of past interaction in new conversations. While participants might choose to exclude or delete past messages in certain chat interactions, such action will be registered in the conversation history and made visible to all participants in the chat group. Analyzing our data, we observed how the vendor’s private chat group made use of the permanent records by copying and pasting previous client messages from the other chat forum to reinforce argumentation. The permanent record was not only used as a way to review past interaction, but also to document past behavior, facilitating a shared context supporting the negotiations – as in pointing out explicitly what the object of concern entails. By pasting in earlier quotations, the participants were able to ‘gesture’ and ‘point’ towards the area of concern, thus supporting grounding activities (CLARK; BRENNAN, 1991; SEGENREICH, 2008) in the conversation.
Secondly, we found that the synchronous interaction embedded in chat technology supported the negotiation. The participants pointed out that
navigating substantial email conversations is often problematic, and it is difficult to fully comprehend and follow the different lines of interaction. Furthermore, prior work has demonstrated how email technology lacks feedback to the sender from the receiver, increasing the risk of misunderstandings (BJØRN; NGWENYAMA, 2009). For instance, it is not possible to know whether one’s emails have been seen by others and whether they are, actually, doing something about them. In chat technology, you can see whether others have seen the messages and identify who is available, and, even more importantly, others can monitor the interaction without interfering directly (TENÓRIO; PINTO; BJØRN, 2018). Thus, the permanent features of the chat technology, the informal language, and the reviewability that lets others monitor the interaction in a ‘synchronous’ way facilitated the successful negotiation.
Thirdly, our data show that the chat technology made it possible for the participants to interact informally, compared to their otherwise formal textual interactions in email. While the permanent nature of emails requires participants to interact using formal language to ensure accurate interpretation, the permanent features of the chat technology were used very differently. In the chat technology, participants were allowed to interact informally, developing a cultural language (i.e., a double-language level) of interaction and interpretation (ROBINSON, 1991), in which ‘items’ of concern were transformed from formal interpretation to a common understanding (OAKLEY, 1999; ROBINSON, 1991). This was evident in the situations where we saw how the participants did not spend any time or effort on using formal contextual language in their messages. Instead, participants jumped right into the issues of concern. While formal communication (e.g., email) is driven by a highly specific context (LOCKWOOD, 2017), the chat interaction facilitates informal interaction. Thus, chat technology supported the participants in grounding activities in the negotiation. During our interviews, participants mentioned several times that they perceived the chat technology to be fast, which was related to the informal language supporting ‘direct talk’ (HINDS; MORTENSEN, 2005; ROBINSON, 1991). In addition, the participants considered it comfortable to use the chat technology, since it allowed them ‘to query one’s entire team at once’ (HERBSLEB et al., 2002).
Finally, we found that chat technology helps to reduce the risk of subgroup dynamics causing faultlines (CRAMTON; HINDS, 2004). When teams are composed of geographically distributed subgroups where demographic attributes align, there is a risk of creating faultlines complicating collaboration. This risk is further strengthened in cases where other types of distinct features confirm the differences across sites, such as nationality or seniority (MATTHIESEN; BJØRN, 2016). Chat technology made it possible for participants to divide their interaction into parallel groups of interactions, which each created and shaped subgroups in different ways – both across and within geographical locations. In our case, the participants divided their interaction into two main chat groups: the inclusive ‘ticket chat group’ and the excluding ‘BrazilSoft private chat group’. By having these pre-defined forums, with existing pre-defined participants and purposes, users did not have to consider whom to send potential information to each time they were sending a message. They did not risk forgetting to add others or including the wrong audiences for their messages. Instead, the pre-determined nature of participation made it possible for participants to utilize the permanent nature, the informal language, and the reviewability and navigation of the conversations in a fast and informal way, making the negotiation similar to what it would have been if the participants had
been collocated. In this way, the chat technology allowed the participants to navigate and organize subgroups while supporting collaboration across subgroups, thus reducing the risk of faultlines. Table 2 summarizes the findings which supported a successful negotiation within a Brazilian distributed software development team. These findings answer our research question: chat technology enables the textual and permanent nature of the conversation, the embedded synchronous interaction, informal interaction among the participants, and, finally, a reduced risk of faultlines. Therefore, our findings can inform the design of new cooperative technologies supporting geographically distributed collaboration.
### Table 2 - Chat technology supporting successful negotiation
<table>
<thead>
<tr>
<th>Findings</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Textual and permanent nature</td>
<td>The textual and permanent nature of the chat technology was crucial for supporting the negotiation between the vendor and the client.</td>
</tr>
<tr>
<td>Synchronous interaction</td>
<td>The synchronous interaction embedded in chat technology supported the negotiation.</td>
</tr>
<tr>
<td>Informal language</td>
<td>Chat technology made it possible for the participants to interact informally compared with their otherwise regular textual interactions, in particular email.</td>
</tr>
<tr>
<td>Reduce faultlines</td>
<td>Chat technology helps to reduce the risk of subgroup dynamics causing faultlines.</td>
</tr>
</tbody>
</table>
Source: The authors
### CONCLUSION
This study investigated a successful negotiation within a Brazilian distributed software development team using chat technology. We found that the chat technology facilitated negotiation by providing a shared context and synchronous interaction embedded in asynchronous functionality, combined with reviewability supporting navigation by the participants. Analyzing the two chat groups and interviewing their participants, we observed that the permanent nature, informal language, navigation, and pre-defined subgroup features were salient for the success of the negotiation and for resolving a potentially critical conflict between two core software developers who were geographically distributed. We argue that chat technology has clear strengths in supporting critical interaction within organizations, which we, as CSCW researchers, should draw on when exploring and designing cooperative technologies for geographically distributed collaboration. We therefore consider the features of chat technology and how such features can be embedded into the many cooperative technologies supporting distributed collaboration, both within and outside of the software development domain.
ACKNOWLEDGMENT
We would like to thank Cesumar Institute of Science, Technology, and Innovation (Instituto Cesumar de Ciência, Tecnologia e Inovação – ICETI), Maringá, Paraná, Brazil and MGA Public Management (MGA Gestão Pública Ltda.), Maringá, Paraná, Brazil.
REFERENCES
BJØRN, P.; ESBENSEN, M.; JENSEN, R. E.; MATTHIESEN, S. Does Distance Still Matter? Revisiting the CSCW Fundamentals on
GUO, ZI.; D’AMBRA, J.; TURNER, T.; ZHANG, H. Improving the Effectiveness of Virtual Teams: A Comparison of Video-Conferencing and Face-to-Face Communication in China. IEEE
RISK-SIGNIFICANT ADVERSE CONDITION AWARENESS STRENGTHENS ASSURANCE OF FAULT MANAGEMENT SYSTEMS
Rhonda Fitz
MPL Corporation, rhonda.s.fitz@ivv.nasa.gov
ABSTRACT
As spaceflight systems increase in complexity, Fault Management (FM) systems are ranked high in risk-based assessment of software criticality, emphasizing the importance of establishing highly competent domain expertise to provide assurance. Adverse conditions (ACs) and specific vulnerabilities encountered by safety- and mission-critical software systems have been identified through efforts to reduce the risk posture of software-intensive NASA missions. Acknowledgement of potential off-nominal conditions and analysis to determine software system resiliency are important aspects of hazard analysis and FM. A key component of assuring FM is an assessment of how well software addresses susceptibility to failure through consideration of ACs. Focus on significant risk predicted through experienced analysis conducted at NASA’s Independent Verification & Validation (IV&V) Program enables the scoping of effective assurance strategies with regard to overall asset protection of complex spaceflight as well as ground systems. Research efforts sponsored by NASA Office of Safety and Mission Assurance (OSMA) defined terminology, categorized data fields, and designed a baseline repository that centralizes and compiles a comprehensive listing of ACs and correlated data relevant across many NASA missions. This prototype tool helps projects improve analysis by tracking ACs and allowing queries based on project, mission type, domain/component, causal fault, and other key characteristics. Vulnerability in off-nominal situations, architectural design weaknesses, and unexpected or undesirable system behaviors in reaction to faults are curtailed with the awareness of ACs and risk-significant scenarios modeled for analysts through this database. Integration within the Enterprise Architecture at NASA IV&V enables interfacing with other tools and datasets, technical support, and accessibility across the Agency. 
This paper discusses the development of an improved workflow process utilizing this database for adaptive, risk-informed FM assurance that critical software systems will safely and securely protect against faults and respond to ACs in order to achieve successful missions.
INTRODUCTION
NASA OSMA sponsors the Software Assurance Research Program (SARP) and has funded Fault Management Architectures (FMA) initiatives centered at the IV&V Program out of the Safety and Mission Assurance (SMA) Support Office (SSO) since 2014. Transitioning research products to application for IV&V and Software Assurance (SA) across the Agency supports the goal to advance risk-informed decision making with respect to safety- and mission-critical FM systems. A FMA Technical Reference (TR) Suite and AC Database were deliverables from the FMA research and are to be integrated within the Enterprise Architecture framework established at NASA IV&V. This research has provided the following benefits:
- Improved capability-based assurance from the provision of more comprehensive data
- More rigorous IV&V analysis from identification of off-nominal scenarios
- Increased efficiency of analyst workflow and broader test coverage
- Greater focus on FM and project areas of vulnerability or significant risk
- Support for reliability and resiliency for critical system safety
The approach and preliminary findings from early research were imparted in a Tech Track paper and presentation at the 31st Space Symposium entitled “Fault Management Architectures and the Challenges of Providing Software Assurance”. Additional findings and a description of the continuation of that effort were presented at the 32nd Space Symposium in a Tech Track paper and presentation entitled “Technical Reference Suite Addressing Challenges of Providing Assurance for Fault Management Architectural Design”. To keep redundancy to a minimum, the reader is advised to refer to those publications for more in-depth details on the
FMA TR Suite. Beginning with a brief introduction to NASA’s IV&V Program, context is given for assurance of FM systems. IV&V technical standards and a thread approach to performing analysis across the lifecycle are then described. How one project applies assurance based on mission capabilities is explored at a high level, followed by a query into the current view of hazard analysis from IV&V project managers’ perspective. A case is made for the need to better define methodologies for incorporating ACs into assurance processes focused on reducing the risk inherent in complex, safety-critical software systems. A description of the scope of the new SARP initiative and then the AC Database design and implementation details are illustrated with an architectural model and database screenshots. Conclusions are drawn, with a look at progress that is ongoing. The data and product depictions provided in this paper have been condensed in order to avoid including any developer-specific or regulation-controlled information. The FM lexicon used is in agreement with that established in the NASA Fault Management Handbook, which is considered the central authority on FM terminology for the Agency.⁴
NASA’S INDEPENDENT VERIFICATION AND VALIDATION PROGRAM
NASA’s IV&V Program was founded in 1993 under NASA Office of Safety and Mission Assurance (OSMA) as a direct result of recommendations made by the National Research Council (NRC) and the Report of the Presidential Commission on the Space Shuttle Challenger Accident.⁵ NASA’s IV&V Program was established as part of an Agency-wide strategy to provide assurance that NASA safety- and mission-critical software will operate reliably, safely, and securely, and to advance systems and software engineering disciplines.
NASA’s IV&V Program has a primary business focus to support NASA missions. The Program takes a systems engineering approach to enable the highest achievable levels of safety and cost-effective IV&V services through the use of broad-based expertise using adaptive engineering best practices and tools. NASA IV&V performs independent testing and analysis throughout the software development lifecycle resulting in objective evidence that provides a level of assurance that system software has been developed in accordance with quality standards, will operate reliably, safely, and securely and that sufficient risk mitigation has been applied to the software that controls and monitors critical NASA systems.
NASA IV&V Technical Framework
The IV&V Technical Framework (TF), IVV 09-1, is a NASA system level procedure that establishes a foundation for a consistent set of methods for providing IV&V technical services to customers, sufficient to ensure safety and risk mitigation for the successful deployment of software-intensive systems. The TF is structured with a set of main objectives that correlate loosely to lifecycle phases, including the verification and validation of:
1.0 Management and Planning
2.0 Concept Documentation
3.0 Requirements
4.0 Test Documentation
5.0 Design
6.0 Implementation
7.0 Operations and Maintenance
One of the key concepts for software assurance is that there are certain perspectives that should be considered during all IV&V analysis, expressed in terms of The Three Questions (3Qs):
Q1 Will the system’s software do what it is supposed to do?
Q2 Will the system’s software not do what it is not supposed to do?
Q3 Will the system’s software respond as expected under adverse conditions?
Additionally, the awareness must be maintained that requirements or any of the objects being assessed cannot be evaluated in isolation and that content under evaluation should always be related back to the acquirer needs and system goals.
Exhibit 1: Adverse conditions illustrated by the decomposition of an off-nominal state associated with a specific behavior within a scenario of a particular mission
**NASA IV&V ANALYSIS THREADS**
With a Capability Development initiative focused on the IV&V of Agile developed projects⁷, these TF goals were extracted and decoupled from traditional waterfall phase dependencies in order to define IV&V analysis activities for incremental software integration and to examine necessary information needed to achieve assurance. In particular, addressing the off-nominal cases described by Q2 and Q3 above, five threads were defined: Hazards, Dependability, Emergent Behavior, Security, and Testing. These threads guide SA analysis activities to ensure that risk-significant ACs get the attention that is warranted to add confidence that mission- and safety-critical capabilities will be achieved as intended and will meet the needs of the system.
Following the **Hazard Thread**, for example, the analyst is tasked to ensure that known software-based hazard causes, contributors, and controls are identified and documented; and to ensure that the software requirements, the design, and the source code provide the capability of controlling identified hazards and do not create hazardous conditions. Note that software architecture, including the FM system architecture, is included in the design analysis.
The **Dependability Thread** partially overlaps these same TF elements as the analyst is tasked to ensure that the software requirements, the design, and the source code meet the dependability and fault tolerance required by the system. The rigor of the FM analysis will help assure the reliability and resiliency of the system.
The **Emergent Behavior Thread** addresses Q2, and is meant to protect from unintended features being introduced as the analyst is tasked to ensure that requirements (parent or child) and design do not introduce capability that is not required; that software architectural and software detailed design choices do not result in unacceptable operational risk; and that the implementation of source code has no emergent behaviors.
The **Security Thread** introduces more deliberate focus in providing information assurance as an important element of FM that has been neglected on spaceflight systems in the past. Objectives include tasks to:
- Ensure that security threats and risks are known, up to date, appropriately documented, and are correct for this mission and that relevant regulatory requirements are identified;
- Ensure that appropriate plans are in place to update the security threats and risks over the course of the development lifecycle to allow for introduction of new or changing threats;
- Ensure the security risks introduced by the system itself, as well as those associated with the environment with which the system interfaces, are appropriately accounted for in the known threats;
- Ensure the system concept from a security perspective and assure that potential security risks with respect to confidentiality (disclosure of sensitive information/data), integrity (modification of information/data), availability (withholding of information or services), and accountability (attributing actions to an individual/process) have been identified;
- Ensure that the requirements address the security threats and risks identified within the system concept specifications and/or the system security concept of operations;
- Ensure that requirements define appropriate security controls to the system, subsystem, according to NASA Procedural Requirement 2810 and driven by the project’s security needs and requirements;
- Ensure that the architecture and detailed design adequately address the identified security requirements both for the system and security risks, including the integration with external components and information and data utilized, stored, and transmitted through the system;
- Ensure that identified security threats and vulnerabilities are prevented, controlled, or mitigated via proposed design components or are documented and addressed as part of the system operations;
- Ensure that the implementation adheres to the system and software design in that it addresses the identified security risks and that the implementation does not introduce new security risks through specific code constructs, features, or coding flaws;
- Ensure that the system and software-required threat controls and safeguards are correctly implemented per proposed design components and validate that they provide the desired levels of protection against threats to the system, or are documented and addressed as part of the system and software operations;
- Ensure the appropriate level of data protection is defined and maintained across all instances and transactions throughout the system and that the security controls are defined to provide comprehensive (end-to-end) protection for the life of the data;
- Ensure that test cases under analysis verify specific security controls (physical, procedural and automated) that cannot be breached leading to compromise of information confidentiality, integrity, or availability;
- Ensure that the integrated system testing covers any areas that may potentially increase the security risk.
The **Test Thread** is even more complex and better described with a diagram as shown in Exhibit 2.
Following these SA analysis threads along with other lessons learned from the NASA Agile Benchmarking report⁸ is particularly helpful when providing SA on nontraditional software development projects. The important concept is that the evaluation of ACs is inherent throughout the lifecycle of analysis, not just at the origin with the planning and scoping of the IV&V project⁷, or, even worse, left to the end with system-level integration testing. An adaptive, iterative process to “follow the risk”, shown in Exhibit 3, is established from the beginning with a TR of system understanding, which should evolve as the project matures. The performance of a risk assessment, the design of an assurance strategy balancing rigor with allotted resources and safety considerations, the execution of analysis to capture evidence along with critical assumptions, and finally the articulation of resulting assurance conclusions occur iteratively at multiple stages in the software lifecycle. Periodic reassessment based on IV&V findings, development project schedule changes, or discovery of additional information should occur as frequently as is practical.
Exhibit 2: How the IV&V Analysis Test Thread crosses all phases of the software development lifecycle, covering nominal and off-nominal behaviors
Exhibit 3: Adaptive, iterative NASA IV&V assurance process to “follow the risk”
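The adaptive loop depicted in Exhibit 3 can be sketched as a simple iteration. The following Python fragment is a hypothetical illustration of the “follow the risk” cycle (assess risk, focus analysis on the riskiest capabilities, capture evidence, reassess); the class names, risk scores, and the assumption that each analysis pass halves a capability's assessed risk are illustrative inventions, not part of the IV&V process definition.

```python
# Hypothetical sketch of the adaptive "follow the risk" assurance loop.
# All names and numeric choices are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Capability:
    name: str
    risk: float                      # assessed risk, re-evaluated each pass
    evidence: list = field(default_factory=list)


def assurance_cycle(capabilities, risk_threshold=0.5, max_iterations=3):
    """Iteratively reassess risk, focus analysis on the riskiest
    capabilities, and accumulate evidence until risk is acceptable."""
    for iteration in range(max_iterations):
        # 1. Risk assessment: effort follows the highest remaining risk.
        in_scope = sorted(
            (c for c in capabilities if c.risk >= risk_threshold),
            key=lambda c: c.risk, reverse=True)
        if not in_scope:
            break                    # all risk below the acceptance threshold
        for cap in in_scope:
            # 2-3. Strategize and execute analysis; capture evidence.
            cap.evidence.append(f"iteration {iteration}: analyzed {cap.name}")
            cap.risk *= 0.5          # assume each pass retires some risk
    # 4. Articulate assurance conclusions from the accumulated evidence.
    return [(c.name,
             "assured" if c.risk < risk_threshold else "open risk",
             len(c.evidence))
            for c in capabilities]
```

The early `break` models the point at which reassessment finds no remaining risk-significant capabilities, so analysis effort is not spent where the risk has already been retired.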
The coordination of this workflow process is enabled by several tools incorporated within the Enterprise Architecture at the IV&V Program. The AC Database that is under development will improve the adaptive, risk-informed FM assurance so that critical software systems will safely and securely protect against hazards and faults and will respond to ACs in order to achieve successful missions. Synergies have been established with the NASA IV&V FM Community of Interest, SA working group, SA tools group, SSO, and IV&V analysts. With the support of NASA OSMA SARP and IV&V management, SA analysts will be increasingly able to provide risk-significant AC awareness to enhance capability-based assurance for software systems with increased confidence.
CAPABILITY-BASED ASSURANCE
Capability-based assurance (CBA)¹⁰ shifts the perspective of IV&V analysis from being based on artifacts or on Computer Software Configuration Items (CSCIs) to a tactic that allows a “follow the risk” approach, leveraging the understanding of technical risk to drive IV&V focus and rigor. Assurance objectives are written in the context of the capabilities to be analyzed, with the goal of providing assurance for mission success. CBA is a framework that enables SA workflow in an adaptive manner, infusing agility in order to accommodate change, with a clear trace to mission objectives and success criteria. It crosses all lifecycle phases, helping to identify IV&V scope and rigor, prioritize and frame analysis, influence static and dynamic test coverage, and more comprehensively communicate findings and assurance conclusions.
When FM systems for NASA projects are being assessed, an integrated system perspective is necessary in order to encompass the many interactions between components, systems, and subsystems within a complex architecture of data and control flow, usually organized within several layers or tiers. There is a deliberate flow-down of mission capabilities through system capabilities and to software capabilities. CBA enables SA professionals to gather broad system knowledge for the purpose of meaningful, risk-based decision making to provide the rigorous assurance necessary for safety and mission-critical NASA FM systems, effectively synchronizing Q1 with Q2 and Q3 considerations. One aspect of CBA may entail modeling systems for a TR that supplies analysts with confidence in the defined approach to understand and decompose risk-significant aspects of the software. These representations should reflect nominal as well as off-nominal scenarios, bringing AC awareness to the forefront in a manner that strengthens SA.
As one example, the Multi-Purpose Crew Vehicle IV&V project team has been instrumental in refining guidance for isolating the driving risks for capabilities, evaluating the role the software entities play, and translating the key factors into an approach that successfully implements CBA within a complex, to-be-human-rated project. The following (nonlinear, iterative) steps were outlined for planning and executing a CBA approach:
1. Develop and model a system understanding linked to capabilities
Nominal and off-nominal behaviors are included
2. Perform a criticality analysis to determine the in-scope capabilities
This is highly dependent on AC awareness and all aspects of FM and hazard analysis
3. Strategize assurance objectives to avoid or mitigate the most significant risks
Prioritizing effort requires balancing available resources
4. Determine the evidence necessary to achieve assurance
IV&V TF threads (described above) prescribe objectives to be met and desired outcomes
5. Perform analysis tasks with identified TF methods and acquire evidence
COMPASS, a catalog of methods, exists with options to tailor to individual projects
6. Perform analysis along IV&V capability threads and acquire evidence
This will be influenced by artifact availability or maturity and should be highly adaptive
7. Consolidate and document evidence
Enterprise Architecture tools are recommended for ease of compatibility
8. Present assurance conclusions
Keep in mind forward work or potential for AC behaviors that may be dynamically tested
The ability to map critical capabilities to ACs or hazard causes that are prevented or mitigated by software controls and verifications is one benefit the AC Database will provide. Also, dependencies or vulnerabilities in capabilities that may indicate missing requirements, weak design, incomplete implementation, or a need for expanded test coverage, either static or dynamic, will become apparent from the AC data accumulated by IV&V projects. The strategy of reducing risk improves the reliability, safety, security and overall quality of NASA missions.
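As a rough illustration of the kind of cross-project queries such a repository could support, the sketch below builds a minimal relational schema in SQLite keyed on the query dimensions described earlier (project, mission type, domain/component, causal fault). The table layout, field names, and sample records are assumptions made for illustration; they are not the actual AC Database design.

```python
import sqlite3

# Illustrative schema only; column names mirror the query dimensions
# described in the text but are NOT the real AC Database layout.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE adverse_condition (
        id INTEGER PRIMARY KEY,
        project TEXT,
        mission_type TEXT,
        domain TEXT,
        causal_fault TEXT,
        description TEXT
    );
    CREATE TABLE mitigation (
        ac_id INTEGER REFERENCES adverse_condition(id),
        software_control TEXT,
        verification TEXT
    );
""")
# Hypothetical sample records from two projects.
conn.executemany(
    "INSERT INTO adverse_condition VALUES (?, ?, ?, ?, ?, ?)",
    [(1, "Orbiter-A", "planetary", "GNC", "sensor dropout",
      "loss of attitude knowledge during cruise"),
     (2, "Lander-B", "planetary", "power", "cell degradation",
      "undervoltage entering safe mode")])
conn.execute(
    "INSERT INTO mitigation VALUES (1, 'switch to backup IMU', 'dynamic test')")

# Cross-project query: ACs on planetary missions with their software
# controls; the LEFT JOIN also surfaces ACs with no recorded mitigation,
# i.e. potential gaps in requirements, design, or test coverage.
rows = conn.execute("""
    SELECT ac.project, ac.causal_fault, m.software_control
    FROM adverse_condition ac
    LEFT JOIN mitigation m ON m.ac_id = ac.id
    WHERE ac.mission_type = 'planetary'
    ORDER BY ac.id
""").fetchall()
```

A `NULL` in the `software_control` column of such a query is exactly the kind of signal the text describes: a capability dependency or vulnerability with no documented control, suggesting a missing requirement or a need for expanded test coverage.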
**NASA IV&V HAZARD ANALYSIS**
A questionnaire was formulated and conducted across 15 IV&V projects to ascertain the status of hazard analysis and AC consideration within the IV&V Program. The results indicate the variety of approaches to handling ACs and where some projects might benefit from others’ successes. Further investigation is necessary, but as of now there is little commonality across projects in the consideration of ACs, so we propose devising a strategy for optimizing assurance of FM by leveraging risk-significant AC awareness. Six questions were posed to IV&V project managers, and the responses are summarized below:
1. **Are hazards/faults/adverse conditions being considered when performing IV&V analyses?**
Unanimously, all projects were scoped to include FM as an integral part of assurance. Software IV&V necessarily includes a system approach to addressing faults and assessing hazards with the 3Qs mentioned above. Safety-critical aspects of hardware systems often have a software component in monitoring, in communicating status, and in responses or mitigations. This understanding impacts every level of analysis, and the identification of ACs occurs at the outset of every project. In fact, the majority of SSO support has been directly reviewing hazard reports and software safety analyses and actively participating in Safety Technical Review Boards where risks are addressed.
2. **What specific documentation does the IV&V Project have access to that identifies the hazards/faults/adverse conditions?**
Documentation varies from project to project, but generally the artifacts listed as source material for AC awareness include: Concept of Operations, Preliminary (and Final) Hazard Report, System (and Software) Safety Analysis, Portfolio Based Risk Assessment, Risk Based Assessment, Software Hazard List, Failure
3. **Do the IV&V analysts ever generate/brainstorm other hazards/faults/adverse conditions to which the system should be capable of responding?**
The responses to this question varied from ‘occasionally’ to a resounding ‘yes’, with in-depth analysis being done in modeling scenarios, critically assessing capabilities or behaviors, and running independent dynamic tests. This is one key area where the IV&V Program brings forth great value to its customers, by identifying additional ACs that may impede mission success or inhibit safety. By employing critical thinking and by always questioning, ‘Is there something else here that could go wrong?’, looking at timing, looking at state transitions, or looking at multiple, concurrent faults, ACs and potential failure scenarios are more fully anticipated and investigated from a risk perspective. This is where the AC Database comes into play, enabling expertise from other projects to be shared in a manner similar to brainstorming, as queries may be made of projects that have similar characteristics. With their process of assurance often taking into account other project examples, the SSO team alone has submitted over a thousand comments that captured missing information (hazards, causes, controls, and verifications) to the great appreciation of the commercial developers.
4. **How are these hazards/faults/adverse conditions being utilized during the actual analyses?**
The multitude of ways that AC awareness is incorporated into analysis is evidence of the importance of Q2 and Q3 for IV&V. For most projects, an independent list of ACs allows IV&V to provide mission assurance at all phases or for all objectives of analysis: concept, requirements, design, implementation, test, operations and maintenance. Off-nominal conditions are addressed throughout the development lifecycle, ensuring that coverage is complete with respect to safety, security, and dependability. As was described in the IV&V Technical Framework Threads section above, the software requirements/design/implementation must meet the reliability and fault tolerance required by the system, must provide the capability of controlling identified hazards, and must not create hazardous conditions. FM branches across all project domains, and is nearly always in scope for IV&V analysis. SSO support includes insight and oversight activities that focus on crew safety, utilizing hazard analysis activities as the main mechanism for communication of software-related risk. There is, however, wide variance in the usage of ACs for assurance and the need for information and process sharing is evident.
5. **Is there a way in which you capture all of the hazard references you come across and store them for use across future projects?**
At this point, the majority of the hazard considerations and AC lists for IV&V projects is embedded within IV&V work documents (including reports, spreadsheets, flow diagrams, models, databases) and tools meant for tracking issues or risks. The Enterprise Content Management server maintains configuration management and is organized on a project basis, with visibility limited to analysts working on that particular project. The AC Database provides a common, cross-project repository for ACs and corresponding relationships with capabilities, hazard types, risks, etc. and can be used as a valuable resource for current and future projects. Opening up this information promotes increased understanding, classification, and alignment among projects, as all are working toward a common goal of decreasing risk on critical NASA missions.
6. **How do you summarize your assessment of the systems coverage and consideration of hazards/faults/adverse conditions?**
IV&V projects’ assessments range from ‘adequate’ to ‘very good’, based largely on how much experience the Program has with a particular developer. This supports the theory that a void currently exists in the AC knowledge-sharing domain; if best practices are collected along with a searchable database of expected ACs, time savings and increased value will be realized with the assurance provided. Q2 and Q3 analysis will result in a higher level of rigor when capitalizing on the benefits of CBA. Assurance objectives drawn from risk-significant AC awareness will be evident through experience gleaned from the success of other projects. The AC Database facilitates continuous improvement of the SA process.
SARP FM INITIATIVE FOR INTEGRATED ASSURANCE OF FAULT MANAGEMENT
The overriding goal in the SARP FM initiatives is to leverage research results to positively impact the application of SA at NASA IV&V and across the Agency. The transition of products to improved process is occurring with deliberate steps in the provision of the FM Architecture TR Suite and the AC Database. Coordination of efforts to further develop the AC Database tool that was conceptualized and prototyped with earlier initiatives will provide access to analysts within the IV&V Program and in incremental deployment, across the Agency.
Risk-significant AC awareness for FM assurance entails the assessment of how software systems address susceptibility to failure, identifying and mitigating potential risks to software resiliency, and defining an assurance strategy relevant to Q2 and Q3 with preventative, responsive, and adaptive behavior. The success of this transition from SARP research to the realization of an effective analysis method that integrates a tool designed for comprehensive AC awareness within the current framework of analysis tools will effectively add value to the assurance process, particularly with regard to Q3 consideration. Successful technology transfer will be ensured by partnering to create a tool that supports an adaptive, risk-focused process. User stories have been acquired from stakeholders to determine functionality to be provided by the AC Database, and initial datasets from 15 IV&V projects have been compiled. The prototype is an instantiation of working software that should, with minimal effort, be formalized into an enterprise tool available to accommodate analysts’ workflow. As integration occurs, this initiative will be able to provide feedback during deployment for modifications or additional requirements.
In order to proliferate the application and benefits from the research in a logical, prioritized manner, the continuation of this effort is described in four main thrusts:
- **Integration**: Leverage knowledge of the AC Database to inform the developer of use case requirements in order to integrate within the IV&V Program framework
- **Data Population**: Assist projects in further populating the database for more comprehensive query capabilities. Accommodate further categorization of data fields in collaboration with subject matter experts
- **Process Definition**: Draft or adapt current methods for FM analysis using the AC Database for assurance expedience and improved Q3 analysis
- **Dissemination**: Publish products to share knowledge for the advancement of FM assurance across the Agency
**PLAN TO FURTHER THE ADVERSE CONDITION DATABASE DEVELOPMENT**
**Integration**
Capitalizing on prior FM SARP initiatives, the integration of the AC Database and FMA TR suite within the Enterprise Architecture framework will be the successful culmination of past research efforts. With deployment of this tool and the development of associated guidance, a process to “follow the risk” and scope assurance efforts accordingly becomes available. NASA OSMA has promoted the need for identification and test coverage of off-nominal conditions for software systems. Understanding which ACs missions may face, and ensuring they are prevented or addressed, is the responsibility of the assurance team, which necessarily should have insight into ACs beyond those defined by the project itself. Earlier research efforts defined terminology, categorized data fields, and designed a baseline repository that centralized and compiled a rudimentary listing of ACs and correlated data relevant to NASA missions. Further development advanced the prototype into a working database, designed to improve analysis by informing the creation of a comprehensive AC list, tracking ACs, and allowing queries based on project, mission type, domain/component, causal fault, and other key characteristics. The repository has been architected, populated with project data, and an interface established for core functionality, including informational search queries and entering, editing, copying, and batch importing of ACs. The user interface was designed to improve efficiency for a typical analyst workflow, and the underlying architecture provides for connectivity with other databases and the TR suite in order to correlate information associated with risks, faults, failures, hazards, and anomalies. This integration effort will encompass informing the developer in the expanded development and deployment for efficient access by the IV&V and SA community, based on user feedback, ensuring the expected return on investment.
**Data Population**
Previous research efforts collected FM and AC data from 15 NASA IV&V projects, each at a different level of fidelity. The project datasets are representative of Deep Space Robotic, Human Spaceflight, Earth Orbiter, Launch Vehicle, and Ground mission software, most often of Classification A. Categorization of AC data and related fields is an ongoing effort, predicated on use cases and adaptive FM analysis processes. Further refinements are proposed with the addition of data and entity relationships from IV&V and SSO projects looking at ACs from a hazard analysis and security perspective for improved query capability. Investigating how a system responds to ACs is an important aspect of hazard analysis and fault management. As the user population increases and AC Database fields grow, the benefits increase primarily as a tool for the SMA community to provide assurance, and secondarily as a mechanism to connect into the knowledge base of related efforts including anomalies, hazards, information assurance, independent testing, and reliability.
**Process Definition**
The AC Database enables more effective analysis (Q3 in particular) and provides greater test coverage for critical missions, helping projects via a risk-informed dynamic look into FM. Formally codifying expectations and methods for the database will help kick-start its socialization and use among projects. Buy-in from analysts will lead to additional use case development and new features, including consideration of information assurance, potentially advancing overall asset protection of flight software systems. Capitalizing on the integrated framework of the TR and learned expertise in FM analysis and AC use among projects will enable the development of methods that will be applicable to the assurance of a wide variety of FM architectures and varied development approaches.
**Dissemination**
Innovative strategies for improving SA methods and tools have been gleaned from earlier initiatives. Broadening outreach to socialize research outcomes against the state of the practice at several NASA centers is proposed. The publication of research findings and results was tremendously successful, with paper presentations at the 31st and 32nd Space Symposia published on the NASA Engineering Network and NASA Technical Report Server, benefiting the wider FM community as well as the SMA and IV&V teams. Continuing this approach to disseminate results is proposed, both at technical conferences and on a smaller scale at several NASA centers. Sharing the integrated AC Database and assurance approaches with SMA personnel and FM subject matter experts will provide an environment of technical knowledge exchange and form connections that will improve the state of FM assurance practice.
AC DATABASE ARCHITECTURAL REQUIREMENTS AND DESIGN
A ‘User Story Workshop’ was held to better understand how the IV&V Program could most effectively utilize meaningful AC data to enhance SA capability. Q3 analysis brings high value to projects from an independent perspective, focusing on areas of significant risk and assessing the projects’ attention to off-nominal scenarios. With this innovation, the objective was to gather requirements to create a database that centralizes a compilation of adverse conditions and related data from IV&V and SSO projects, and to architect the fields such that there may be sharing of data between SA projects for more rigorous analysis. The workshop was a forum to acquire theories of how the Program could use more rigorous AC data and to formulate these into ‘user stories’ to inform the development process. Input was requested from all stakeholders and user groups that recognize that the IV&V Program, as well as the SA community across the Agency, will benefit from increased attention to Q3 and the rigorous identification of potential ACs, related mitigations, and verifications for overall CBA.
The format of the ‘user story’ was:
As a <user type>, I want to <meet this goal>, so that <some value is created>
The resulting concepts became the backlog of features that were developed in an agile-like fashion, with weekly demonstrations of working software for peer review and discussion. A sampling of the brainstormed use cases that drove functionality is illustrated in Exhibit 4.
Designing the relational database was done in an incremental fashion as various tables for the SQL database, as well as the fields associated with them, were defined and refined. The resulting architecture is shown in Exhibit 5. Complete descriptions of all types of data to be found in the fields, along with some examples, may be referenced in the AC Database user manual [14]. In the next two sections, the primary table ‘adversecondition’ is described to illustrate the capacity to include multiple fields for data relevant to ACs, and the functionality that is provided in the prototyped AC Database is outlined.
**AC Database Primary Table: ‘adversecondition’**
The ‘adversecondition’ table is used for the primary information about a particular AC. Each AC is linked to at least one mission from the ‘mission’ table. The following fields are included:
1. **AC_IDNum**
- Unique identifier for the adverse condition
- Auto-populated by the database. Primary Key of the table
2. **AC_Identifier**
- Unique identifier for the adverse condition
- Made up of the Mission Name (MPCV, SLS, GPM, etc.) followed by a ‘-’ and the AC_IDNum value
- Example: MPCV-2
- This value should be auto-populated when an adverse condition is created
- When an adverse condition is copied to a particular Mission, this value should be auto-populated
3. **ACName**
- Adverse Condition Name
- Text field with a descriptive name for the adverse condition
4. **Open_ACName**
- Adverse Condition Name that has been scrubbed of any SBU/ITAR information
- Text field with a descriptive name for the adverse condition
5. **Desired_Reaction_System**
- Text field for the system response for when the event of the adverse condition occurs
6. **Desired_Reaction_Software**
- Text field for the software response for when the event of the adverse condition occurs
7. **AC_Likelihood**
- Short text field for the risk likelihood of the AC happening, or the severity of the AC, or the risk to focus on
8. **AC_Result_Timing**
- Text field for specific timing for the adverse condition as to when it could occur (or not occur)
9. **ComponentName**
- Text field for identification of the related component affected by the adverse condition
10. **Component_Description**
- Text field for a detailed description of the related component. The ComponentName offers merely identification for the related component
11. **SW_Cause_Indicator**
- Indicator field to show if an AC is caused by software
- Example: Y, N
<table>
<thead>
<tr>
<th>User Type</th>
<th>AC Database Goal</th>
<th>Value Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Technical Quality & Excellence (TQ&E) Analyst</td>
<td>To be able to become familiar with the contents of the entire database at a high level. To provide references to my projects.</td>
<td></td>
</tr>
<tr>
<td>User</td>
<td>To know how many times an adverse condition has been found across the projects. To understand how likely the condition is.</td>
<td></td>
</tr>
<tr>
<td>Analyst</td>
<td>To have some ideas for what to have the requirement author to consider adding as a requirement or comment something to do if the requirement fails. Some system requirements state what the system and its components shall successfully do. Sometimes a requirement is written with what shall be done if the requisite action fails (Q2).</td>
<td></td>
</tr>
<tr>
<td>IVV Project Manager</td>
<td>To see a list of all adverse conditions that were or are going to be analyzed on my IVV project or on any specified IVV project. To provide a comprehensive assurance statement.</td>
<td></td>
</tr>
<tr>
<td>Project Lead</td>
<td>To be able to find adverse conditions from similar domains and missions. To plan the analysis activities that will most likely prevent problems from occurring on the mission I'm reviewing.</td>
<td></td>
</tr>
<tr>
<td>Analyst</td>
<td>To search the adverse conditions list by Domain for power management conditions. To assure that batteries can be charged under identified conditions.</td>
<td></td>
</tr>
<tr>
<td>TQ&E Analyst</td>
<td>To filter an adverse conditions list on a project or Domain basis. To determine if an IVV project is adequately covering Q3 conditions within their analysis focus.</td>
<td></td>
</tr>
<tr>
<td>Project Analyst</td>
<td>To be able to search for and rely on consistent terminology in AC descriptions, scoring, and classification. To have confidence that I will see relevant items from other projects and not have to wade through numerous irrelevant ones.</td>
<td></td>
</tr>
<tr>
<td>User</td>
<td>To find correlating AC's. To have a quick reference to similar AC's.</td>
<td></td>
</tr>
<tr>
<td>Project Analyst</td>
<td>To be able to search adverse conditions/hazards for software categories. To find software "caused" AC's. To find software "detected" AC's. To find software "mitigated" AC's.</td>
<td></td>
</tr>
<tr>
<td>Analyst</td>
<td>To have adverse conditions created as a hierarchy of related pairs. To find root cause or expected behavior.</td>
<td></td>
</tr>
<tr>
<td>Information Assurance</td>
<td>To understand the context and origin from which an adverse condition was derived. To help identify similar origins/contexts of interest which may be a "trouble" area.</td>
<td></td>
</tr>
<tr>
<td>Quality Assurance / Metrics Team</td>
<td>To figure out what metrics might be useful to capture. To better capture what might be useful to assist in either helping to capture adverse conditions, etc.</td>
<td></td>
</tr>
<tr>
<td>Project Analyst</td>
<td>To search adverse conditions from a centralized location. To see if my adverse condition is already stored in the database.</td>
<td></td>
</tr>
<tr>
<td>Project Analyst</td>
<td>To read/search adverse conditions from a centralized location. To see if any of the adverse conditions stored in the database are applicable to my project.</td>
<td></td>
</tr>
<tr>
<td>TQ&E Analyst</td>
<td>To search for relevant adverse conditions based on Mission type (launch vehicle, earth orbiter, etc.) or Domain (C&DH, GN&C, EPS, etc.). To support the Assurance Strategy planning of IVV activities or the preparation of heritage reports.</td>
<td></td>
</tr>
<tr>
<td>Project Analyst</td>
<td>To search adverse conditions from a centralized location. To see the proposed methods of resolving this adverse condition.</td>
<td></td>
</tr>
<tr>
<td>Project Analyst</td>
<td>To search adverse conditions from a centralized location. To see if any of the adverse conditions do not have any proposed methods of resolving this adverse condition.</td>
<td></td>
</tr>
<tr>
<td>Project Analyst</td>
<td>To see queries related to Mission type (i.e., science, weather, human exploratory, etc.) and to adverse condition type (i.e., hardware, space related, software, security violation, etc.) To link Mission and adverse condition.</td>
<td></td>
</tr>
<tr>
<td>Project Analyst</td>
<td>To search through Ascent (Mission Phase), Dynamic Separation events (Category Groupings). To gather a set of common causes along with their associated adverse conditions and components.</td>
<td></td>
</tr>
</tbody>
</table>
Exhibit 4: Use cases for desired AC Database functionality
Exhibit 5: AC Database architecture expressed as an Entity-Relationship diagram
12. **SW_Cause_Description**
- Text field for a detailed description of the related software cause for the AC. The SW_Cause_Indicator offers merely an indication of whether the AC was caused by software
13. **SW_Detection_Indicator**
- Indicator field to show if the AC is detected by software
- Example: Y, N
14. **SW_Detection_Description**
- Text field for a detailed description of the related software detection. The SW_Detection_Indicator offers merely an indication of whether the AC can be detected by software
15. **SW_Mitigation_Indicator**
- Indicator field to show if the AC can be mitigated by software
- Example: Y, N
16. **SW_Mitigation_Description**
- Text field for a detailed description of the related software mitigation for the AC. The SW_Mitigation_Indicator offers merely an indication of whether the AC can be mitigated by software
17. **SW_Verification_Type**
- Text field for the related software verification type
18. **AC_Origin**
- Text field for the source or origin of an adverse condition
- Example: Document detailing test results
19. **MissionIDNum**
- Unique identifier for the Mission
- Auto-populated by the database
- Foreign key linking the Adversecondition table and the Mission table
20. **AC_Failure_Description**
- Text field for a detailed description of the Failure Type that is particular to an AC. The Failure_Type table offers merely a general category
21. **AC_Hazard_Description**
- Text field for a detailed description of the Hazard_Type that is particular to an AC. The Hazard_Type table offers merely a general category
22. **AC_Domain_Description**
- Text field for a detailed description of the Domain that is particular to an AC. The Domain table offers merely a general category
23. **DocumentReferences**
- Text field for documentation references that are related to an adverse condition
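The field descriptions above map naturally onto a relational schema. The following sqlite3 sketch is illustrative only — the actual tool is a Microsoft Access database, and the column types, the sample mission, and the sample AC name are assumptions — but it shows how the auto-populated AC_IDNum, the mission foreign key, and the derived AC_Identifier (Mission Name, '-', AC_IDNum) fit together:

```python
import sqlite3

# Illustrative schema only: the real AC Database is a Microsoft Access
# application, and the column types and sample data below are assumptions.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE mission (
    MissionIDNum INTEGER PRIMARY KEY,
    MissionName  TEXT NOT NULL
);
CREATE TABLE adversecondition (
    AC_IDNum           INTEGER PRIMARY KEY,   -- auto-populated by the database
    ACName             TEXT NOT NULL,
    SW_Cause_Indicator TEXT CHECK (SW_Cause_Indicator IN ('Y', 'N')),
    MissionIDNum       INTEGER NOT NULL REFERENCES mission (MissionIDNum)
);
""")
con.execute("INSERT INTO mission (MissionIDNum, MissionName) VALUES (1, 'MPCV')")
con.execute(
    "INSERT INTO adversecondition "
    "(AC_IDNum, ACName, SW_Cause_Indicator, MissionIDNum) "
    "VALUES (2, 'Loss of commanding during ascent', 'Y', 1)"  # hypothetical AC
)
# AC_Identifier is derived: Mission Name, then '-', then AC_IDNum (e.g. MPCV-2).
row = con.execute("""
    SELECT m.MissionName || '-' || ac.AC_IDNum AS AC_Identifier, ac.ACName
    FROM adversecondition ac
    JOIN mission m USING (MissionIDNum)
""").fetchone()
print(row[0])   # MPCV-2
```

The foreign key mirrors field 19 (MissionIDNum), and the CHECK constraint mirrors the Y/N indicator fields.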
**AC Database Import Data Functionality**
The AC DB Import Template file format must be used. This file is in Microsoft Excel format.
- Only data on the first tab of the Microsoft Excel spreadsheet will be imported
- The first row of the spreadsheet must have the column names exactly as they are given. Do not change, alter, or reorder them
- Column Names: Adverse Condition Name, Open AC Name, Related Capabilities, Related Entities, Mission, Spaceflight Domains, Domain Description, Causal Failures for the AC, Failure Types, Hazard Description, Hazard Types, Spacecraft/Mission Systems Relevant to the AC, Desired System Reaction, Desired Software Reaction, Likelihood, Timing, Software Cause Description, Software Cause Description Indicator, Software Detection Description, Software
The following fields in the template must have values that match data that already exists in the Adverse Condition Database. If the data to be imported does not match existing data in the database, the attempted row in the import spreadsheet will fail with an error and will not be imported into the database. In Developer Mode, new data may be entered to this table to enable the row to be imported.
- **Mission**
- If the Mission has not been added, go to the Add/Edit Mission form to enter the information
- **Domain**
- **Failure Type**
- **Hazard Type**
- **Spacecraft/Mission System Relevant to the AC**
- **Related Capabilities**
- **Related Entities**
The following fields must have a Y, N or blank.
- **Example:** the data may be upper or lower case or blank
- Software Cause Description Indicator
- Software Detection Description Indicator
- Software Mitigation Description Indicator
The following fields must have the delimiter ';' (a semicolon) between multiple entries for the same AC.
- **Example:** the Domain field on the spreadsheet may have more than one and should have each Domain separated by a single semicolon [Guidance Navigation and Control; Propulsion].
- **Related Capabilities**
- **Related Entities**
- **Domain**
- **Hazard Type**
- **Failure Type**
- **Spacecraft/Mission System Relevant to the AC**
To import data:
- Click ‘Import Data’ button on the Search Form screen
- Select the Microsoft Excel import data file that follows the import template
- Import routine will then ensue
- At the end of the import, a dialogue box will appear giving the statistics of the import (# of records parsed, # of records imported, # of records failed)
- If errors are encountered, see the ErrorLog.txt file for the types of error and the ImportErrors.xlsx file for the error data
- The Error files will be located in the same folder as the Microsoft Access database file
- The data in the ImportErrors.xlsx file may be corrected and then used as the import file.
- Prior to import, delete the first column of the spreadsheet (IDCOUNT). The file will then be in the proper format for importing
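The template rules above — indicator fields restricted to Y/N/blank, semicolon-delimited multi-value cells, and mission names that must already exist — can be sketched as simple validation logic. This Python sketch is an assumed reconstruction for illustration; the actual import routine is a Microsoft Access script, and the function names and example row here are hypothetical:

```python
# Sketch of the import-template row checks described above (assumed logic,
# not the actual Access import script).
def split_multi(cell):
    """Split a semicolon-delimited cell (e.g. Domain) into its entries."""
    return [part.strip() for part in cell.split(";") if part.strip()]

def check_indicator(value):
    """Indicator fields must be Y, N, or blank (upper or lower case)."""
    return value.strip().upper() in ("Y", "N", "")

def validate_row(row, known_missions):
    errors = []
    # The Mission must match data already in the database.
    if row.get("Mission") not in known_missions:
        errors.append(f"unknown mission: {row.get('Mission')!r}")
    for field in ("Software Cause Description Indicator",
                  "Software Detection Description Indicator",
                  "Software Mitigation Description Indicator"):
        if not check_indicator(row.get(field, "")):
            errors.append(f"{field} must be Y, N, or blank")
    return errors

# Hypothetical import row for demonstration.
row = {
    "Adverse Condition Name": "Battery undervoltage during eclipse",
    "Mission": "GPM",
    "Spaceflight Domains": "Guidance Navigation and Control; Propulsion",
    "Software Cause Description Indicator": "y",
}
print(split_multi(row["Spaceflight Domains"]))
print(validate_row(row, known_missions={"GPM", "SLS", "MPCV"}))
```

A row that fails any of these checks would be written to the error files rather than imported.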
**Additional AC Database Functionality**
Exhibits 6 through 10 illustrate several screen shots of the Microsoft Access user interface for the AC Database. Usability studies were performed with various user groups during the development.
Exhibit 6: AC Database Search Form with full query functionality in terms of mission, domain, failure, hazard, etc.
Exhibit 7: AC Database Detail Form for consolidating AC-specific data and relationships to other tables
Exhibit 8: Cloning an AC from one mission to another is accomplished with functionality to duplicate AC records
Exhibit 9: AC Database Mission Form for describing missions along with their capabilities and software entities.
Exhibit 10: Adding new ACs is accomplished with this form for an individual AC or with an import template and script for multiple ACs and their associated fields
CONCLUSION
The strengthening of SA strategies by renewed emphasis on risk-significant AC awareness brings potential for far-reaching impact across the Agency. The complexity of FM and the importance of effectively providing assurance that NASA safety- and mission-critical software will operate reliably, safely, and securely demand rigorous attention to the methodologies applied. NASA’s IV&V Program is in a position to leverage technical expertise and broad project experience to improve software assurance strategies. In this arena, IV&V technical standards and a thread approach to performing analysis throughout the software development lifecycle have been documented as a solid approach to CBA. The integrated role of hazard analysis as it supports the “follow the risk” approach enables assurance strategies aimed at critical FM systems necessary for mission success. The current SARP initiative furthering the development of the AC Database is illustrated with design details and rationale for functionality that has been stakeholder-defined and implemented in an incremental fashion. This initiative brings forth value by assessing how software systems address susceptibility to failure, identifying and mitigating potential risks to software resiliency, and defining assurance strategies particularly focused on preventative, responsive, and adaptive behavior in the complex environments in which NASA systems are deployed. As research progresses, the AC Database and supporting assurance methodologies seek to:
- Improve capability-based assurance from the provision of more comprehensive data
- Provide more rigorous IV&V analysis from identification of off-nominal scenarios
- Increase efficiency of analyst workflow and enable broader test coverage
- Allow greater focus on FM and project areas of vulnerability or significant risk
- Deliver support for reliability and resiliency for critical system safety
Continual improvement on SMA is the goal, affording analysts deeper understanding of FM SA strategies, methods, and tools in order to be efficient in providing FM assurance, particularly with regard to addressing risk-significant ACs. Collaboration and infusion of results will continue as the AC Database is deployed to a wider audience and methods are enhanced to take advantage of the tool as a dynamic, living resource tailored to improve workflow in the ultimate goal of reducing risk and increasing confidence in NASA mission success.
MySQL Replication Tutorial
Lars Thalmann
Technical lead
Replication, Backup, and Engine Technology
Mats Kindahl
Lead Developer
Replication Technology
MySQL Conference and Expo 2008
Concepts
MySQL Replication
Why?
1. **High Availability**
Possibility of fail-over
2. **Load-balancing/Scale-out**
Query multiple servers
3. **Off-site processing**
Don’t disturb master
How?
Snapshots (Backup)
1. **Client program mysqldump**
With log coordinates
2. **Using backup**
InnoDB, NDB
**Binary log**
1. **Replication**
Asynchronous pushing to slave
2. **Point-in-time recovery**
Roll-forward
**Terminology**
**Master MySQL Server**
- Changes data
- Has binlog turned on
- Pushes binlog events to slave after slave has requested them
**Slave MySQL Server**
- Main control point of replication
- Asks master for replication log
- Gets binlog event from master
**Binary log**
- Log of everything executed
- Divided into transactional components
- Used for replication and point-in-time recovery
**Synchronous replication**
- A transaction is not committed until the data has been replicated (and applied)
- Safer, but slower
- This is available in MySQL Cluster
**Asynchronous replication**
- A transaction is replicated after it has been committed
- Faster, but you can in some cases lose transactions if master fails
- Easy to set up between MySQL servers
Configuring Replication
Required configuration – my.cnf
- Replication Master
log-bin
server_id
- Replication Slave
server_id
Optional items in my.cnf – What to replicate?
- **Replication Master**
- binlog-do-db
- binlog-ignore-db
- **Replication Slave**
- replicate-do-db, replicate-ignore-db
- replicate-do-table, replicate-ignore-table
- replicate-wild-do-table
- replicate-wild-ignore-table
More optional configuration on the slave
- read-only
- log-slave-updates
- skip-slave-start
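Pulling the required and optional settings together, a minimal pair of configuration files might look like the sketch below. The server IDs, log base name, and file locations are illustrative assumptions, not values from the tutorial:

```ini
# master: my.cnf on the master host (illustrative values)
[mysqld]
server_id = 1          # must be unique across the topology
log-bin   = mysql-bin  # turns on the binary log

# slave: a separate my.cnf on the slave host
[mysqld]
server_id        = 2
read-only        = 1   # optional: refuse writes from non-privileged users
skip-slave-start = 1   # optional: wait for an explicit START SLAVE
```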
Configuration – grants on master
```sql
GRANT REPLICATION SLAVE ON *.*
TO 'rep_user'@'slave-host'
IDENTIFIED BY 'this-is-the-password';
```
How to deploy replication
Step 1: Make a backup of the master
Either an “offline backup” or an “online backup”...
Configuration – Good advice
- Start the binary log on the master immediately following the backup, e.g.:
  - Make the GRANTs on the master server
  - Shut down mysqld on the master server
  - Edit my.cnf
  - Make the backup
  - Restart mysqld on the master
- Do *not* try to configure master_host, etc. in my.cnf on the slave.
(this is still allowed, but it was always a bad idea)
Restore the backup onto the slave
Configure the slave: part 1

Run on the slave, pointing it at the master:

```sql
CHANGE MASTER TO
    master_host = "dbserv1",
    master_user = "rep-user",
    master_password = "this-is-the-password";
```
Configure the slave: part 2

```sql
CHANGE MASTER TO
    master_host = "dbmaster.me.com",
    master_log_file = "binlog-00001",
    master_log_pos = 0;
```

Start the slave!

```sql
START SLAVE;
```
Replication
Topologies
Master with Slave

[Diagram: the master pushes binary log events to the slave over a TCP connection]
Replication is independent of Storage Engines
- You can replicate between any pair of engines
- InnoDB to InnoDB
- MyISAM to MyISAM
- InnoDB to MyISAM
- MEMORY to MyISAM
- etc...
- The binary log is **not** the InnoDB transaction log (or the Falcon log, or ...)
Master with Many Slaves

[Diagram: one master replicating to four slaves]
Chain
log_slave_updates = 1
Chain – Server 2 goes down...
... Server 3 is still up, but out of sync
Each server has a unique “server_id”
... and every event in a binary log file contains the server id number of the server where the event originated.
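This per-event server id is what keeps circular topologies from looping: by default, a replicating server skips events stamped with its own server_id. A toy Python sketch (conceptual only, not MySQL internals):

```python
# Conceptual sketch, not MySQL source code: each binlog event carries the
# server_id of the server where it originated, and a replicating server
# skips events stamped with its own id. In a ring, that is what stops an
# update from circulating forever.
def should_apply(event_server_id, my_server_id):
    return event_server_id != my_server_id

ring = [1, 2, 3]                      # server_id of each server in the ring
origin = 1                            # an update executed on server 1
applied_on = []
for server in ring[1:] + [ring[0]]:   # the event travels around the ring
    if should_apply(origin, server):
        applied_on.append(server)
print(applied_on)                     # [2, 3] -- server 1 drops its own event
```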
Ring

[Diagram: three master/slave servers with server_id 1, 2, and 3 replicating in a ring]
The ring topology is not a recommended configuration
Pair of Masters
The pair is a “special case” of the ring topology used for high availability.
The two most common topologies for MySQL Replication
(diagram: a master/master pair, and a master/slave pair with further slave pairs replicating from it)
The "Relay Slave"
The master has to handle only one TCP connection.
log_slave_updates
And now introducing... the blackhole storage engine
engine = blackhole
The relay slave manages replication logs, but not actual data.
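One way to build such a relay, sketched with a hypothetical table name: convert the relay's copies of the tables to the blackhole engine, so that with `log_slave_updates` enabled the statements still pass through into the relay's binary log, but no rows are ever stored locally.

```sql
-- on the relay slave only (table name is illustrative):
-- writes are discarded, yet the events are still re-logged downstream
ALTER TABLE messages ENGINE = BLACKHOLE;
```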
Replication Commands
A quick run-through of the commands
**SHOW MASTER STATUS**
- Used on master
- Requires SUPER or REPLICATION CLIENT privileges
- Gives log file and position master is writing to
- Also shows database filters used
```sql
mysql> SHOW MASTER STATUS;
+---------------+----------+--------------+------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+---------------+----------+--------------+------------------+
| mysql-bin.003 | 73 | test | manual,mysql |
+---------------+----------+--------------+------------------+
```
SHOW BINARY LOGS
- Used on master
- Requires SUPER privileges
- Will display a list of binary logs on the server
- Use it before using PURGE BINARY LOGS
```sql
mysql> SHOW BINARY LOGS;
+---------------+-----------+
| Log_name | File_size |
+---------------+-----------+
| binlog.000015 | 724935 |
| binlog.000016 | 733481 |
+---------------+-----------+
```
SHOW BINLOG EVENTS
- Used on master
- Requires REPLICATION SLAVE privileges
- Show events in binary log
- Also check `mysqlbinlog` utility
```sql
mysql> SHOW BINLOG EVENTS FROM 390 LIMIT 1\G
*************************** 1. row ***************************
Log_name: slave-bin.000001
Pos: 390
Event_type: Query
Server_id: 2
End_log_pos: 476
Info: use `test`; create table t1 (a int)
1 row in set (0.00 sec)
```
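The `mysqlbinlog` utility decodes a binary log file into SQL text offline; a typical invocation (file name reused from the SHOW BINARY LOGS example above):

```shell
$ mysqlbinlog binlog.000015 > events.sql
```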
SHOW SLAVE HOSTS
- Used on master
- Requires REPLICATION SLAVE privileges
- Shows list of slaves *currently registered* with the master
- Only slaves started with `report-host` option are visible
```sql
mysql> SHOW SLAVE HOSTS;
+-----------+-----------+------+-----------+
| Server_id | Host      | Port | Master_id |
+-----------+-----------+------+-----------+
|         2 | 127.0.0.1 | 9308 |         1 |
+-----------+-----------+------+-----------+
1 row in set (0.00 sec)
```
PURGE BINARY LOGS
- Used on master
- Requires SUPER privileges
- Removes log files before a certain log file or date
- MASTER can be used in place of BINARY
- Alternative is to use variable EXPIRE_LOGS_DAYS
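For example, logs can be purged up to a named file or up to a date (file name reused from the SHOW BINARY LOGS example; the date is illustrative):

```sql
PURGE BINARY LOGS TO 'binlog.000016';
PURGE BINARY LOGS BEFORE '2007-04-20 00:00:00';
```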
**SET SQL_LOG_BIN**
- Used on master
- Requires SUPER privileges
- Session variable
- Controls logging to binary log
- Does not work for NDB!
```sql
mysql> SET SQL_LOG_BIN=0;
mysql> INSERT INTO t1 VALUES (1,2,3);
mysql> SET SQL_LOG_BIN=1;
```
SET GLOBAL EXPIRE_LOGS_DAYS
- Used on master
- Require SUPER privileges
- 0 means "never expire"
- Positive value means expire logs after this many days
- Logs will be removed at startup or binary log rotation
- Can be used with running slave
*Logs are removed! Make sure you have backup!*
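For example, to expire binary logs after a week:

```sql
SET GLOBAL EXPIRE_LOGS_DAYS = 7;
```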
RESET MASTER
- Used on master
- Requires RELOAD privileges
- *Deletes all binary logs in the index file!*
- Resets binary log index
- Used to get a "clean start"
- *Use with caution! You lose data!*
SHOW SLAVE STATUS
- Used on slave
- Requires SUPER or REPLICATION CLIENT privileges
- Shows some interesting information:
- If the slave threads are running
- What position the I/O thread read last
- What position the SQL thread executed last
- Error message and code, if thread stopped due to an error
SHOW SLAVE STATUS (5.1)
```
mysql> SHOW SLAVE STATUS\G
************************** 1. row **************************
Slave_IO_State:
Master_Host: 127.0.0.1
Master_User: root
Master_Port: 10190
Connect_Retry: 1
Master_Log_File:
Read_Master_Log_Pos: 4
Relay_Log_File: slave-relay-bin.000001
Relay_Log_Pos: 4
Relay_Master_Log_File:
Slave_IO_Running: No
Slave_SQL_Running: No
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 0
Relay_Log_Space: 102
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
1 row in set (0.00 sec)
```
CHANGE MASTER TO
- Used on slave
- Requires SUPER privileges
- Configures the slave server connection to the master
- Slave should not be running
- The user needs REPLICATION SLAVE privileges on the master
```sql
CHANGE MASTER TO
MASTER_HOST='adventure.com',
MASTER_USER='dragon',
MASTER_PASSWORD='xyzzy';
```
START SLAVE and STOP SLAVE
- Used on slave
- Used to start or stop the slave threads
- Defaults to affecting both I/O and SQL thread
- ... but individual threads can be started or stopped
START SLAVE SQL_THREAD
START SLAVE IO_THREAD
**RESET SLAVE**
- Used on slave
- Removes all info on replication position
Deletes `master.info`, `relay-log.info` and all relay logs
- *Relay logs are unconditionally removed!*
... even if they have not been fully applied
SET GLOBAL SQL_SLAVE_SKIP_COUNTER
- Used on slave
- Global server variable
- Requires SUPER privileges
- The slave SQL thread must not be running
- Slave will skip events when starting
- Useful when recovering from slave stops
- Might leave master and slave with different data in tables
... so be careful when you use it
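A typical recovery sequence after the SQL thread stops on a bad event (skipping a single event here, purely as an example):

```sql
STOP SLAVE SQL_THREAD;
SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;   -- skip the offending event
START SLAVE SQL_THREAD;
```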
Use Cases
Use Cases, Part 1 – Basic Replication
Intensive Reads
(diagram: one master fanned out to three read slaves)
High Availability
(diagram: a master/slave pair, each replicating from the other)
Presented by MySQL Conference & Expo
“Specialist” slaves – backups and reporting
“Specialist” slaves – per-application
(diagram: one master replicating to four slaves; friends table 10 GB, messages table 30 GB; "friends list" queries go to slaves dedicated to the friends data, "message board" queries to slaves dedicated to the messages data)
“Specialist” slaves – Blackhole Engine
(diagram: one master with four slaves; on the "friends list" slaves the message table uses the blackhole engine, and on the "message board" slaves the friends table does)
Things to think about in basic replication
- Initial snapshot of slaves
- Load balancing of clients
- Failover of clients to new master
HA + Scale out?
(diagram: a master/slave pair at the top, with four additional slaves replicating from it)
Any better?
(diagram: the master/slave pair feeds a proxy master, which in turn feeds the four slaves)
Problem: slave failover to a new master
- Look at SHOW SLAVE STATUS. This gives the file and position on the failed master.
- “File 34 position 6000” on the failed master may correspond to “File 33 position 22000” on the new master. Find the corresponding file and position.
- CHANGE MASTER TO
master_host = ...
master_log_file = ...
master_log_pos = ...
- START SLAVE
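Filling in those steps with the positions from the example above (hostname hypothetical):

```sql
CHANGE MASTER TO
    master_host = 'new-master-host',
    master_log_file = 'binlog-00033',
    master_log_pos = 22000;
START SLAVE;
```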
Handling the failover problem
1. Automate it (scripting)
2. Avoid it
Use Cases, Part 2 – HA and Scale Out
Architecture 1: Pair of masters – Active & Standby
(diagram: a virtual IP address managed by a heartbeat manager fronts an active/standby master pair on a shared disk array, with slaves replicating from the active master)
Use Cases, Part 2 – HA and Scale Out
2: MySQL Cluster as master, MySQL slaves
Use Cases, Part 2 – HA and Scale Out
3: Master and proxy master are both HA pairs
Use Cases, Part 2 – HA and Scale Out
4: Replicate from Cluster through HA proxy pair
(diagram: NDB cluster nodes feed a pair of blackhole proxy masters sharing a disk array, which in turn replicate to InnoDB slaves)
Application-level partitioning and the Federated Engine
How to JOIN friends table with message table?
“friends list” slaves
“message board” slaves
Application-level partitioning and the Federated Engine
```sql
CREATE TABLE messages ( id int unsigned ... )
    ENGINE=FEDERATED
    CONNECTION="mysql://feduser:fedpass@message-master/friendschema/messages";
```
Use Cases, Part 3 – Multiple Data Centers
(diagram: San Jose hosts the active master with two slaves; New York hosts a standby master with two slaves; replication flows over a secure tunnel; writes go to the active master, reads are served by the local slaves)
( Jeremy Cole – MySQL Users Conf 2006 )
After Failover
(diagram: after failover, the New York master is active; the San Jose master replicates from it over the secure tunnel; writes now go to New York, reads stay local)
( Jeremy Cole – MySQL Users Conf 2006 )
Row-based replication
Row-based replication (MySQL 5.1)
- **Statement-based replication**
Replicate statement doing changes
Requires up-to-date slave
Requires determinism
- **Row-based replication**
Replicate actual row changes
Does not require up-to-date slave
Can handle any statement
Comparison of replication methods
- **Row-based replication**
- Can handle "difficult" statements
- Required by cluster
- **Statement-based replication**
- Sometimes smaller binary log
- Binary log can be used for auditing
Row-based replication features
- Log is idempotent
... provided all tables in log have primary key
- Statement events and row events can be mixed in log
... so format can be switched during run-time
(slave switches automatically as required)
... and even different formats for different threads
Row-based replication as a foundation
- Conflict detection and conflict resolution
- Fine-grained filtering
- NDB Cluster replication
- Multi-channel replication
- Horizontal partitioning
... sending different rows to different slaves
Filtering
- For statement-based replication:
Statements are filtered
Filtering is based on current (used) database
Master filtering is on databases only
- For row-based replication:
Rows are filtered
Filtering is based on actual database and table
Master filtering for individual tables possible
... but not implemented
Want both statement and row format?
- Master in STATEMENT mode, slave in ROW mode
- Slave converts statements executed into row format
- Once in row format, it stays in row format
Binary Log
Modes and Formats of the Binary Log
Logging modes
- Three modes: STATEMENT, MIXED, and ROW
- Server variable BINLOG_FORMAT controls mode
- Mode is used to decide logging format for statements
- Logging format is representation of changes
- More about that in just a bit
SET BINLOG_FORMAT
- `SET BINLOG_FORMAT=mode`
- **Session and global variable**
- **Mode** is one of **STATEMENT, ROW, or MIXED**
- **STATEMENT**: statements are logged in statement format
- **ROW**: statements are logged in row format
- **MIXED (default)**
- Statements are logged in statement format by default
- Statements are logged in row format in some cases
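For example, switching the current session to row logging while changing the default for new sessions separately:

```sql
SET SESSION BINLOG_FORMAT = ROW;    -- this session only
SET GLOBAL  BINLOG_FORMAT = MIXED;  -- default for new sessions
```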
Switching modes
- Mode can be switched at run-time
... even inside a transaction
- Switching mode is *not* allowed:
If session has open temporary tables
From inside stored functions or triggers
If ‘ndb’ is enabled
**MIXED mode**
- Safe statements are usually logged in statement format
- Unsafe statements are logged in row format
- Heuristic decision on what is unsafe, currently:
- Statement containing `UUID()` or calls to UDFs
- Statements updating >1 table with auto-increment columns
- `INSERT DELAYED` statements
- Statements using `RAND()` or user-defined variables
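For instance, under MIXED mode a statement that calls `UUID()` is non-deterministic, so the server falls back to row format for it (table name hypothetical):

```sql
SET SESSION BINLOG_FORMAT = MIXED;
INSERT INTO ids VALUES (UUID());   -- unsafe for statement logging: logged as rows
```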
**Binary logging formats**
- The *format* tells how changes are stored in log
- Two formats: statement and row
- Formats can be mixed in binary log
```sql
mysql> show binlog events;
+-----------------+-----+-------------+---+----------------------------------------+
| Log_name | Pos | Event_type | … | Info |
+-----------------+-----+-------------+---+----------------------------------------+
| ... | 4 | Format_desc | … | Server ver: 5.1.17-beta-debug-log... |
| ... | 105 | Query | … | use `test`; CREATE TABLE tbl (a INT) |
| ... | 199 | Query | … | use `test`; INSERT INTO tbl VALUES (1) |
| ... | 290 | Table_map | … | table_id: 16 (test.tbl) |
| ... | 331 | Write_rows | … | table_id: 16 flags: STMT_END_F |
+-----------------+-----+-------------+---+----------------------------------------+
5 rows in set (0.00 sec)
```
Statement logging format
- The *statement executed* is logged to the binary log
- Statement logged *after* statement has been executed
- **Pros:**
- Usually smaller binary logs
- Binary log can be used for auditing
- **Cons:**
- Cannot handle partially executed statements
- Cannot handle non-deterministic data
- Does not work with all engines (e.g., NDB)
Row logging format
- The actual rows being changed are logged
- Rows are grouped into events
Pros:
- Can handle non-deterministic statements
- Can handle UDF execution
- Idempotent
Cons:
- No easy way to see what rows are logged
- Does not work with all engines (e.g., blackhole)
Example: multi-table update
- UPDATE t1,t2 SET t1.b = ..., t2.b = ...
```sql
mysql> show binlog events from 480;
+-----------------+-----+-----------------+---+----------------------------------------+
| Log_name        | Pos | Event_type      | … | Info                                   |
+-----------------+-----+-----------------+---+----------------------------------------+
| ...             | 480 | Table_map       | … | table_id: 16 (test.t1)                 |
| ...             | 520 | Table_map       | … | table_id: 17 (test.t2)                 |
| ...             | 560 | Update_rows     | … | table_id: 16                           |
| ...             | 625 | Update_rows     | … | table_id: 17 flags: STMT_END_F         |
+-----------------+-----+-----------------+---+----------------------------------------+
4 rows in set (0.00 sec)
```
Example: CREATE-SELECT
- CREATE TABLE t3 SELECT * FROM t1
```sql
mysql> show binlog events from 690;
+----------+-----+-------------+---+----------------------------------------+
| Log_name | Pos | Event_type  | … | Info                                   |
+----------+-----+-------------+---+----------------------------------------+
| ...      | 480 | Query       | … | use `test`; CREATE TABLE `t3` (        |
|          |     |             |   |   a INT(11) DEFAULT NULL,              |
|          |     |             |   |   b INT(11) DEFAULT NULL )             |
| ...      | 520 | Table_map   | … | table_id: 18 (test.t3)                 |
| ...      | 625 | Write_rows  | … | table_id: 18 flags: STMT_END_F         |
+----------+-----+-------------+---+----------------------------------------+
3 rows in set (0.00 sec)
```
Special cases
- TRUNCATE vs. DELETE in row mode
TRUNCATE is logged in statement format
DELETE is logged in row format
- GRANT, REVOKE, and SET PASSWORD
These statements change rows in the **mysql** database tables: `tables_priv`, `columns_priv`, and `user`
Replicated in statement format
Other statements on these tables are replicated in row format
How objects are logged
- Databases
- Tables
- Views
- Stored procedures
- Stored functions
- Triggers
- Events
- Users
*We are here only considering how these objects are logged when using row mode*
*For statement mode, everything is logged in statement format*
Databases and Tables
- **Database manipulation statements**
- Logged in statement format
- **Table manipulation statements**
- **Statement format:** `CREATE`, `ALTER`, and `DROP`
- **Row format:** `INSERT`, `DELETE`, `UPDATE`, etc.
Views
- CREATE, ALTER, and DROP logged in statement format
- Changes are logged by logging changes to the tables
```sql
mysql> UPDATE living_in SET name='Matz' WHERE name='Mats';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> show binlog events from 1605;
+----------+------+-------------+-----+--------------------------------+
| Log_name | Pos  | Event_type  | ... | Info                           |
+----------+------+-------------+-----+--------------------------------+
| mast...  | 1605 | Table_map   | ... | table_id: 17 (test.names)      |
| mast...  | 1648 | Update_rows | ... | table_id: 17 flags: STMT_END_F |
+----------+------+-------------+-----+--------------------------------+
2 rows in set (0.01 sec)
```
Stored procedures
- CREATE, ALTER, and DROP are replicated in statement format (with a DEFINER)
- CALL is logged in row format by logging all changes done by the call
```sql
mysql> create procedure foo(a int) insert into t1 values(a);

mysql> show binlog events from 102\G
*************************** 1. row ***************************
   Log_name: master-bin.000001
        Pos: 102
 Event_type: Query
  Server_id: 1
End_log_pos: 244
       Info: use `test`; CREATE DEFINER=`root`@`localhost` procedure foo(a int) insert into t1 values(a)
1 row in set (0.00 sec)
```
Stored functions
- **CREATE, ALTER, and DROP** are replicated in statement format (with a **DEFINER**)
- The effects of calling a stored function are logged in row format
```sql
mysql> select a, bar(a) from t2;
mysql> show binlog events from 557;
```
```sql
+----------+-----+------------+-----+--------------------------------+
| Log_name | Pos | Event_type | ... | Info |
+----------+-----+------------+-----+--------------------------------+
| master... | 557 | Table_map | ... | table_id: 18 (test.t1) |
| master... | 596 | Write_rows | ... | table_id: 18 flags: STMT_END_F |
+----------+-----+------------+-----+--------------------------------+
2 rows in set (0.01 sec)
```
Triggers
- **CREATE, ALTER, and DROP** are replicated in statement format (with a **DEFINER**)
- The effects of a trigger are logged in row format
```sql
mysql> insert into t1 values (1,2);
mysql> show binlog events from 780;
```
```
+----------+-----+-------------+---+----------------------------------------+
| Log_name | Pos | Event_type  | … | Info                                   |
+----------+-----+-------------+---+----------------------------------------+
| ...      | 780 | Table_map   | … | table_id: 16 (test.t1)                 |
| ...      | 820 | Table_map   | … | table_id: 17 (test.t2)                 |
| ...      | 860 | Write_rows  | … | table_id: 16                           |
| ...      | 925 | Write_rows  | … | table_id: 17 flags: STMT_END_F         |
+----------+-----+-------------+---+----------------------------------------+
4 rows in set (0.00 sec)
```
Events
- CREATE, ALTER, and DROP are replicated in statement format (with a DEFINER)
- The event is disabled on the slave
- The effects of an event are logged in row format
Implementation
How replication works
MySQL Replication Architecture
MySQL 4.0-5.0
(diagram: on the master, the server parses, optimizes, and executes application statements against the storage engines (SE1, SE2) and writes statement-based events to the binlog, flushed at commit; on the slave, the I/O thread copies the master's binlog into a relay binlog, and the SQL thread replays it through the server into the slave's storage engines and binlog)
MySQL Replication Architecture
MySQL 5.1: Row-based replication (RBR)
Row-based Replication
Comparison between SBR and RBR
Advantages of Row-based Replication (RBR)
- Can replicate non-deterministic statements (e.g. UDFs, LOAD_FILE(), UUID(), USER(), FOUND_ROWS())
- Makes it possible to replicate between MySQL Clusters (having multiple MySQL servers or using NDB API)
- Less execution time on slave
- Simple conflict detection (that is currently being extended)
Advantages of Statement-based Replication (SBR)
- Proven technology (since MySQL 3.23)
- Sometimes produces smaller log files
- Binary log can be used for auditing
Four new binlog events
1. Table map event
– Semantics: “This table ID matches this table definition”
2. Write event (After image)
– Semantics: “This row shall exist in slave database”
3. Update event (Before image, After image)
– Semantics: “This row shall be changed in slave database”
4. Delete event (Before image)
– Semantics: “This row shall not exist in the slave database”
Various optimizations:
- Only primary key in before image. Works if table has PK
- Only changed column values in after image. Works if table has PK
Log is *idempotent* if PK exists and there are only RBR events in log. Slave can execute both SBR and RBR events.
Cluster Replication
MySQL Cluster Replication
Local and Global Redundancy
Local Synchronous Replication – two-phase commit
Global Asynchronous Replication
Tools and Techniques
Making a snapshot from a master database
- This is necessary for bringing new slaves online.
- Options:
- Shut down master & take offline backup
- Use "ibbackup" to make an online physical backup
- www.innodb.com
- Use `mysqldump --master-data`
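A sketch of the `mysqldump` approach (`--single-transaction` assumes InnoDB tables, so the dump runs without locking out writers):

```shell
$ mysqldump --master-data=1 --single-transaction --all-databases > snapshot.sql
```

`--master-data=1` writes the matching CHANGE MASTER TO statement into the dump, so a slave restored from it starts at the correct binlog position.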
Table Checksums
- How do you know the slave really has the same data as the master?
- Giuseppe Maxia
*Taming the Distributed Data Problem – MySQL Users Conf 2003*
- Baron Schwartz
MySQL Table Checksum
[http://sourceforge.net/projects/mysqltoolkit](http://sourceforge.net/projects/mysqltoolkit)
“Delayed Replication”
- Bruce Dembecki, LiveWorld
*Lessons from an Interactive Environment – MySQL Users Conf 2005*
- Provides hourly log snapshots and protection against “user error” (*e.g.* `DELETE FROM important_table`)
(timeline diagram: the I/O thread fetches the master's log continuously while the SQL thread applies it an hour behind; logs are flushed hourly, e.g. 2:05 to 3:05, then 3:05 to 4:05)
Managing Virtual IP addresses
- For failover and high availability. (Always prefer virtual IP addresses over DNS changes)
- **Heartbeat** – [www.linux-ha.org](http://www.linux-ha.org)
also runs on Solaris, BSD, Mac OS X
- Several other software alternatives
Sun Cluster, HP ServiceGuard, etc.
- Or a hardware load balancer
F5 Big IP, Foundry ServerIron, etc.
Shared Storage for Active/Standby pairs
- DRBD
www.drbd.org
- Hardware SAN
- Hardware NAS
NetApp
Tunnels & proxies to use for managing multiple data centers
- Master & slaves can use SSL
- ... or offload the SSL processing to other servers using stunnel
www.stunnel.org
- Proxy writes to masters as in Jeremy Cole’s example
TCP Proxy software
Hardware load balancer
References
- MySQL Manual (http://dev.mysql.com/doc/)
Chapter: Replication
- MySQL Manual (http://dev.mysql.com/doc/)
Chapter: MySQL Cluster Replication
- MySQL Forums (http://forums.mysql.com/)
MySQL Replication forum
- Replication Tricks and Tips
Tuesday 4:25pm
- BOF: Replication
Tuesday evening, first slot (probably 7:30pm)
lars@mysql.com, mats@mysql.com
www.mysql.com
## Common Event Header – 19 bytes
<table>
<thead>
<tr>
<th>Field</th>
<th>Length</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Timestamp</td>
<td>4 bytes</td>
<td>Seconds since 1970</td>
</tr>
<tr>
<td>Type</td>
<td>1 byte</td>
<td>Event type</td>
</tr>
<tr>
<td>Master Id</td>
<td>4 bytes</td>
<td>Server Id of server that created this event</td>
</tr>
<tr>
<td>Total size</td>
<td>4 bytes</td>
<td>Event total size in bytes</td>
</tr>
<tr>
<td>Master position</td>
<td>4 bytes</td>
<td>Position of next event in master binary log</td>
</tr>
<tr>
<td>Flags</td>
<td>2 bytes</td>
<td>Flags for event</td>
</tr>
</tbody>
</table>
### Diagram
```
+----------------+----------------+----------------+
|   time stamp   |      type      |   master id    |
+----------------+----------------+----------------+
|   total size   | master position|     flags      |
+----------------+----------------+----------------+
```
Statement-based INSERT 1/2: Query event header
```
$ mysqlbinlog --hexdump master-bin.000001
# at 235
#060420 20:16:02 server id 1 end_log_pos 351
# Position   Timestamp    Type  Master ID
# 000000eb   e2 cf 47 44  02    01 00 00 00 00
#            Size         Master Pos   Flags
#            74 00 00 00  5f 01 00 00  10 00
```
Statement-based INSERT 2/2: Query event data
```
$ mysqlbinlog --hexdump master-bin.000001
# 000000fe  02 00 00 00 00 00 00 00
#           04 00 00 1a 00 00 00 40  |..................|
# 0000010e  00 00                    |.............std|
# 0000011e  04 08                    |........test.INSE|
# 0000012e  52 54                    |RT.INTO.t1.VALUE|
# 0000013e  53 20                    |S...A...B......X|
# 0000014e  27 2c                    |...Y......X...X.|
# 0000015e  29                       |.|
#       Query  thread_id=2  exec_time=0  error_code=0
SET TIMESTAMP=1145556962;
INSERT INTO t1 VALUES ('A','B'), ('X','Y'), ('X','X');
```
Row-based INSERT 1/2: Table map event
```
$ mysqlbinlog --hexdump master-bin.000001
# at 235
#060420 20:07:01 server id 1 end_log_pos 275
# Position   Timestamp    Type  Master ID
# 000000eb   c5 cd 47 44  13    01 00 00 00
#            Size         Master Pos   Flags
#            28 00 00 00  13 01 00 00  00 00
# 000000fe   0f 00 00 00 00 00 00 00
#            04 74 65 73 74 00 02 74  |..........test..t|
# 0000010e   31 00 02 fe fe           |1....|
#       Table_map: `test`.`t1` mapped to number 15
BINLOG 'xc1HRBMBAAAAKAAAAABMBA...3QAAnQxAAL+/g=='
```
## Row-based INSERT 2/2: Write event
```
$ mysqlbinlog --hexdump master-bin.000001
# at 275
#060420 20:07:01 server id 1 end_log_pos 319
# Position   Timestamp    Type  Master ID
# 00000113   c5 cd 47 44  14    01 00 00 00 00
# 00000126   0f 00 00 00 00 01 00    10 00
# 00000136   02 ff f9 01 41 01 42 f9  |........X.Y..X.X|
# 00000136   01 58 01 59 f9 01 58 01
#            58
#       Write_rows: table id 15
BINLOG 'xc1HRBQBAAAALAAAAAD...EBQvkBWAFZ+QFYAVg==';
```
MySQL Cluster Replication
Where to get the log events?
(diagram: applications reach the cluster through several MySQL servers, or directly via the NDB API; the data nodes (DB) hold the data, so the replication log events are collected from the data nodes rather than from any single MySQL server)
MySQL Cluster Replication
Concurrency control inside master cluster
(diagram: applications go through MySQL servers acting as transaction coordinators (TC); data nodes DB 1–4 are split into node groups 1 and 2, with row-level locking on the primary replica)
MySQL Cluster Replication
Log shipping inside master cluster
(diagram: changed row data flows from the transaction coordinators to a replication server, which ships it onward; data nodes DB 1–4 sit in node groups 1 and 2, with row-level locking on the primary replica)
MySQL Replication Architecture
MySQL 5.1
(diagram: the NDB injector feeds the row-based log from the cluster data nodes through the injector interface into the master's binlog, alongside the ordinary storage engines (SE1, SE2); the slave's I/O thread copies it into a relay binlog and the SQL thread applies it)
MySQL Cluster Replication
Behaves like ordinary MySQL Replication
Local Synchronous Replication – two-phase commit
Global Asynchronous Replication
E-mail: btwala@uj.ac.za
ABSTRACT
Recently, the use of machine learning (ML) algorithms has proven to be of great practical value in solving a variety of engineering problems, including the prediction of failure, fault, and defect-proneness, as space system software becomes more complex. One of the most active areas of recent ML research has been the use of ensemble classifiers. This paper shows how ML techniques (or classifiers) can be used to predict software faults in space systems, including many aerospace systems, and further combines individual classifiers into ensembles by having them vote for the most popular class, to improve the prediction of system software fault-proneness. Benchmarking results on four NASA public datasets show the Naive Bayes classifier to be the more robust software fault predictor, while most ensembles with a decision tree classifier as one of their components achieve higher accuracy rates.
Keywords: Software metrics, machine learning, classifiers, ensemble, fault-proneness prediction
1. INTRODUCTION
The growing demand for higher operational efficiency and safety in industrial processes has resulted in a huge interest in fault-detection techniques. Engineering researchers and practitioners remain concerned with accurate prediction when building classifiers, and software fault or defect prediction remains the most popular research area. Software fault prediction has both safety and economic benefits in technical systems: it prevents future failures and improves process maintenance schedules. A lack of adequate tools to estimate and evaluate the cost of a software system failure is one of the main challenges in systems engineering. Researchers use datasets of past projects to build and validate estimation or prediction systems of software development effort, for example, which allows them to make management decisions, such as resource allocation; they may also use datasets of measurements describing software systems to validate metrics predicting quality attributes. The techniques used to build models that measure or predict, on the other hand, often require good quality data.
When using machine learning (ML) techniques to build such prediction systems, poor data quality in either training or test (unknown) set or in both sets, can affect prediction accuracy. Various ML techniques (e.g., supervised learning) have been used in systems engineering to predict faults or defects, software (project) development effort, software quality, and software defects. Reviews of the use of ML in software engineering report that ML in software engineering is a mature technique based on widely-available tools using well understood algorithms. The decision tree (DT) classifier is an example of a ML algorithm that can be used for predicting continuous attributes (regression) or categorical attributes (classification). Thus, software prediction can be cast as a supervised learning problem, i.e., the process of learning to separate samples from different classes by finding common features between samples of known classes. An important advantage of ML over statistically-based approaches as a modelling technique lies in the fact that the interpretation of, say, decision rules, is more straightforward and intelligible to human beings than, say, principal component analysis (a statistical tool for finding patterns in data of high dimension). In recent years, there has been an explosion of papers in the ML and statistics communities discussing how to combine models or model predictions.
One of the major problems in applying ML algorithms to fault or failure prediction is the (sometimes) unavailability and scarcity of software data, i.e., data for training the model. Most of the companies that own (or control) such data do not share their failure data from space systems, due to corporate propriety interests and national security concerns, so a useful database with a great amount of data cannot be developed. Most techniques for predicting attributes of a large space system require past data from which models will be constructed and validated. Often data is collected either with no specific purpose in mind (i.e., it is collected because it might be useful in future) or the analysis being carried out has a different goal than that for which the data was originally collected. The relevance of this issue is strictly proportional to the dimensionality of the collected data. Thus, accurate prediction of software faults remains a priority among empirical engineering researchers.
Many works in both the ML and statistical pattern recognition communities have shown that combining (ensemble) individual classifiers is an effective technique for improving classification accuracy. An ensemble is generated by training multiple learners for the same task and then combining their predictions. There are different ways in which ensembles can be generated and the resulting outputs combined to classify new instances. Popular approaches to creating ensembles include changing the instances used for training through techniques such as bagging and boosting, changing the features used in training, and introducing randomness into the classifier itself. The interpretability of classifiers can produce useful information for experts responsible for making reliable classifications, which makes decision trees an attractive scheme.
Few studies have been carried out on the effect of the top classifiers in data mining identified by Wu et al., and studies of the effect of ensemble classifiers on software fault prediction accuracy in space systems are rare in engineering.
2. MACHINE LEARNING ALGORITHMS
In supervised learning, especially for multivariate data, a classification function \( y = f(x) \) is learned from training instances of the form \( \{ (x_1, y_1), \ldots, (x_m, y_m) \} \), i.e., pairs of the form \( \{ x, f(x) \} \), and predicts one (or more) output attribute(s) or dependent variable(s) given the values of the input attributes. The \( x \) values are vectors of the form \( \{ x_1, \ldots, x_m \} \) whose components can be numerically ordered, nominal or categorical, or ordinal. The \( y \) values are drawn from a discrete set of classes \( \{1, \ldots, K\} \) in the case of classification. Depending on the usage, the prediction can be definite or a probability distribution over possible values.
Given a set of training examples and any given prior probabilities and misclassification costs, a learning algorithm outputs a classifier. The classifier is a hypothesis about the true classification function that is learned from or fitted to training data. The classifier is then tested on test data. A wide range of algorithms, in both classical statistics and from various ML paradigms, have been developed for this task of supervised learning classification.
2.1 Apriori
The Apriori is a classical data mining algorithm used for learning association rules. It calculates rules that express probabilistic relationships between items in frequent item-sets. For example, a rule derived from frequent item-sets containing \( A, B, \) and \( C \) might state that if \( A \) and \( B \) are included in a transaction, then \( C \) is likely to be included.
An association rule states that an item or group of items implies the presence of another item with some probability. For an example, a rule like: If a customer buys wine and bread, he/she often buys cheese, too. It expresses an association between (sets of) items, which may be products of a supermarket or a mail-order company; special equipment options of a car; optional services offered by telecommunication companies, etc. An association rule states that if we pick a customer at random and find out that he/she selected certain items (bought certain products, chose certain options, etc.), we can be confident, quantified by a percentage, that he/she also selected certain other items (bought certain other products, chose certain other options, etc.). Of course, they do not want just any association rules; they want good rules, rules that are expressive and reliable. The standard measures to assess association rules are the support and the confidence of a rule, both of which are computed from the support of certain item-sets. Unlike decision tree rules (described in Section 4.2), which predict a target, association rules simply express a correlation.
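The support and confidence measures just mentioned can be computed directly. Below is a minimal sketch; the transactions and items are invented for the example, not taken from the paper's datasets:

```python
def support(itemset, transactions):
    """Fraction of transactions that contain every item in `itemset`."""
    itemset = set(itemset)
    return sum(itemset <= set(t) for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Conditional probability of `consequent` given `antecedent`."""
    return (support(set(antecedent) | set(consequent), transactions)
            / support(antecedent, transactions))

transactions = [
    {"wine", "bread", "cheese"},
    {"wine", "bread"},
    {"bread", "milk"},
    {"wine", "bread", "cheese", "milk"},
]
# Rule: {wine, bread} -> {cheese}
print(support({"wine", "bread"}, transactions))                 # 0.75
print(confidence({"wine", "bread"}, {"cheese"}, transactions))  # ~0.667
```

A rule would typically be reported only when both measures exceed user-chosen minimum thresholds.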
2.2 Decision Trees
Decision tree (DT) induction is one of the simplest and yet one of the most successful forms of supervised learning algorithm. It has been extensively pursued and studied in many areas such as statistics, and ML for the purposes of classification and prediction.
Decision tree classifiers have four major objectives, these are:
(i) To classify correctly as much of the training sample as possible.
(ii) Generalise beyond the training sample so that unseen samples could be classified with as high accuracy as possible.
(iii) Be easy to update as more training samples become available (i.e., be incremental), and
(iv) Have as simple a structure as possible.
Objective (i) is actually highly debatable and not all tree classifiers are concerned with objective (iii).
Decision trees are non-parametric (i.e., no assumptions about the distribution of the data are made) and a useful means of representing the logic embodied in software routines. A decision tree takes as input a case or example described by a set of attribute values, and outputs a Boolean or multi-valued decision. For the purposes of this paper, we shall stick to the Boolean case.
A classification tree (which is what will be covered in this paper) as opposed to a regression tree means that the response variable is qualitative rather than quantitative. In the classification case, when the response variable takes value in a set of previously-defined classes, the node is assigned to the class which represents the highest proportion of observations. Whereas, in the regression case, the value assigned to cases in a given terminal node is the mean of the response variable values associated with the observations belonging to a given node. Note that in both cases, this assignment is probabilistic, in the sense that a measure of error is associated with it.
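As an illustration of how a classification tree might choose a split, the sketch below scores candidate thresholds on a single numeric attribute by the weighted Gini impurity of the two children. The Gini criterion and the toy data are assumptions for the example; the paper does not specify which splitting rule its DT implementation uses.

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a set of class labels."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(values, labels):
    """Best threshold on one numeric attribute by weighted child impurity."""
    n = len(values)
    pairs = sorted(zip(values, labels))
    best = (None, float("inf"))
    for i in range(1, n):
        threshold = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [y for x, y in pairs[:i]]
        right = [y for x, y in pairs[i:]]
        score = (len(left) * gini(left) + len(right) * gini(right)) / n
        if score < best[1]:
            best = (threshold, score)
    return best

# Faulty (1) modules tend to have higher complexity in this toy data.
complexity = [2, 3, 4, 10, 12, 15]
faulty     = [0, 0, 0, 1, 1, 1]
print(best_split(complexity, faulty))  # (7.0, 0.0): threshold 7 splits the classes perfectly
```

Each terminal node would then be assigned the majority class among its training observations, as described above.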
2.3 k-Nearest Neighbour
One of the most venerable algorithms in ML is the nearest neighbour (NN). Nearest-neighbour methods are sometimes referred to as memory-based reasoning or instance-based learning (IBL) or case-based learning (CBL) techniques and have been used for classification tasks. They essentially work by assigning to an unclassified sample point the classification of the nearest of a set of previously-classified points.
The entire training set is stored in memory. To classify a new instance, the Euclidean distance (possibly weighted) is computed between the instance and each stored training instance, and the new instance is assigned the class of the nearest neighbouring instance. More generally, the \( k \) nearest neighbours are computed, and the new instance is assigned the class that is most frequent among those \( k \) neighbours (this variant shall from now on be denoted \( k \)-NN). IBLs have three defining general characteristics: a similarity function (how close together two instances are), a typical-instance selection function (which instances to keep as examples), and a classification function (deciding how a new case relates to the learned cases).
A further non-parametric procedure of this form is the \( k\)-NN approach. To classify an unknown pattern, the \( k\)-NN approach looks at a collection of the \( k \) nearest points and uses a voting mechanism to select between them, instead of looking at the single nearest point and classifying according to that with ties broken at random. If there are ties for the \( k \)th nearest observations, all candidates are included in the vote.
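A minimal sketch of the \( k \)-NN procedure just described, using unweighted Euclidean distance and majority voting; the training data is invented, and ties here fall back on first-seen order rather than the random tie-breaking mentioned above:

```python
import math
from collections import Counter

def knn_predict(query, training, k=3):
    """Classify `query` by majority vote among its k nearest training points.

    `training` is a list of (feature_vector, class_label) pairs;
    the distance is plain (unweighted) Euclidean distance.
    """
    by_distance = sorted(training, key=lambda p: math.dist(query, p[0]))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

train = [((1.0, 1.0), "low"), ((1.5, 2.0), "low"),
         ((5.0, 5.0), "high"), ((6.0, 5.5), "high"), ((5.5, 6.0), "high")]
print(knn_predict((5.2, 5.1), train, k=3))  # "high"
```

With `k=1` this reduces to the basic nearest-neighbour rule.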
2.4 Naïve Bayes Classifier
There are two roles for Bayesian methods: (i) to provide practical learning algorithms such as Naïve Bayes learning and Bayesian belief network learning by combining prior knowledge with observed data and (ii) to provide a useful conceptual framework that could provide a gold standard for evaluating other learning algorithms.
Bayesian learning algorithms use probability theory as an approach to concept classification. Bayesian classifiers produce probabilities for (possibly multiple) class assignments, rather than a single definite classification. Bayesian learning should not be confused with the Bayes optimal classifier. Also, Bayesian learning should not be confused with the Naïve Bayes or idiot’s Bayes classifier, which assumes that the inputs are conditionally independent given the target class.
The naïve Bayes classifier is usually applied with categorical inputs, and the distribution of each input is estimated by the proportions in the training set; hence the naïve Bayes classifier (NBC) is a frequentist method.
The NBC is perhaps the simplest and most widely studied probabilistic learning method. It learns from the training data, the conditional probability of each attribute \( A_i \) given the class label \( C \). The strong major assumption is that all attributes \( A_i \) are independent given the value of the class \( C \). Classification is therefore done applying Bayes rule to compute the probability of \( C \) given \( A_{1},...,A_{n} \) and then predicting the class with the highest posterior probability. The assumption of conditional independence of a collection of random attributes is critical. Otherwise, it would be impossible to estimate all the parameters without such an assumption.
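The NBC as described can be sketched as follows: conditional proportions are estimated from the training set, and Bayes' rule, under the conditional independence assumption, picks the class with the highest posterior. The two-attribute toy data is an assumption for illustration, and no smoothing of zero counts is applied:

```python
from collections import Counter, defaultdict

def train_nbc(rows, labels):
    """Estimate class priors and per-attribute conditional proportions."""
    priors = Counter(labels)
    cond = defaultdict(Counter)  # (attribute index, class) -> value counts
    for row, y in zip(rows, labels):
        for i, v in enumerate(row):
            cond[(i, y)][v] += 1
    return priors, cond, len(labels)

def predict_nbc(row, priors, cond, n):
    """Pick the class with the highest (unnormalised) posterior probability."""
    best, best_p = None, -1.0
    for y, count in priors.items():
        p = count / n  # prior P(C)
        for i, v in enumerate(row):
            p *= cond[(i, y)][v] / count  # conditional independence assumption
        if p > best_p:
            best, best_p = y, p
    return best

rows = [("high", "yes"), ("high", "no"), ("low", "yes"), ("low", "no")]
labels = ["faulty", "faulty", "clean", "clean"]
priors, cond, n = train_nbc(rows, labels)
print(predict_nbc(("high", "yes"), priors, cond, n))  # "faulty"
```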
2.5 Support Vector Machines
Support vector machines (SVMs), pioneered by Vapnik,\textsuperscript{39} are pattern classifiers that can be expressed in the form of hyperplanes that discriminate positive instances from negative instances. Motivated by statistical learning theory, SVMs have successfully been applied to numerical tasks, including classification. They can perform both binary classification (pattern recognition) and real-valued function approximation (regression estimation) tasks. They perform structural risk minimisation (also used in other systems, such as neuro-fuzzy systems) and identify key support vectors (the points closest to the boundary). Risk minimisation measures the expected error on an arbitrarily large test set with the given training set. Support vector machines non-linearly map their \( n \)-dimensional input space into a high-dimensional feature space, in which a nonlinear classifier is constructed.
Support vector machines have been recently proposed as a new technique for pattern recognition. Intuitively, given a set of points which belong to either of the two classes, a linear SVM finds the hyperplane leaving the largest possible fraction of points of the same class on the same side, while maximising the distance of either class from the hyperplane. The hyperplane is determined by a subset of the points of the two classes, named support vectors, and has a number of interesting theoretical properties.
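The sketch below does not train an SVM; it only illustrates the linear decision rule for a given separating hyperplane \( (w, b) \) and the geometric margin that a linear SVM maximises. The hyperplane and the labelled points are invented for the example:

```python
import math

def svm_decision(w, b, x):
    """Decision value w·x + b; its sign gives the predicted class."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def margin(w, b, points):
    """Geometric margin: smallest distance from any point to the hyperplane."""
    norm = math.sqrt(sum(wi * wi for wi in w))
    return min(abs(svm_decision(w, b, x)) for x, _ in points) / norm

# Hypothetical separating hyperplane x1 + x2 - 6 = 0 for toy 2-D data.
w, b = (1.0, 1.0), -6.0
points = [((1.0, 1.0), -1), ((2.0, 1.0), -1), ((5.0, 4.0), +1), ((4.0, 5.0), +1)]

# The hyperplane separates the two classes correctly...
assert all((svm_decision(w, b, x) > 0) == (y > 0) for x, y in points)
# ...and the points attaining this minimum would be the support vectors.
print(margin(w, b, points))  # ~2.12
```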
2.6 Ensemble Procedure
A motivation for ensemble is that a combination of outputs of many weak classifiers produces powerful ensembles with higher accuracy than a single classifier obtained from the same sample. The ensemble then makes use of all data available and utilises a systematic pattern of classification results. The generalised ensemble algorithm is summarised in Fig. 1, with the overall six-stage scheme of the technique shown in Fig. 2 and described in more detail in the following sub-sections.
<table>
<thead>
<tr>
<th>Step</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>1.</td>
<td>Partition original dataset into ( n ) training datasets, ( TR1, TR2, ..., TRn ).</td>
</tr>
<tr>
<td>2.</td>
<td>Construct ( n ) classifiers (CF1, CF2, ..., CFn) with the different training datasets ( TR1, TR2, ..., TRn ) to obtain ( n ) individual classifiers (ensemble members), each generated by a different machine learning algorithm and therefore diverse.</td>
</tr>
<tr>
<td>3.</td>
<td>Select ( m ) de-correlated classifiers using de-correlation maximisation algorithm.</td>
</tr>
<tr>
<td>4.</td>
<td>Using Eqn. (3), obtain ( m ) classifier output values (misclassification error rates) of unknown instance.</td>
</tr>
<tr>
<td>5.</td>
<td>Transform the output value to reliability degrees of positive class and negative class, given the imbalance of some datasets</td>
</tr>
<tr>
<td>6.</td>
<td>Fuse the multiple CFs into aggregate output in terms of majority voting. When there is a tie in the predicted probabilities, choose the class with the highest probability, or else, use a random choice when the probabilities between the two methods are equal.</td>
</tr>
</tbody>
</table>
Figure 1. The ensemble algorithm.
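Step 6 of Fig. 1, majority voting with random tie-breaking, can be sketched as follows; the three threshold "classifiers" are invented stand-ins for trained ensemble members:

```python
import random
from collections import Counter

def majority_vote(classifiers, x):
    """Fuse member predictions by majority voting; ties between equally
    popular classes are broken at random, as described in the text."""
    votes = Counter(clf(x) for clf in classifiers)
    top = votes.most_common()
    winners = [cls for cls, n in top if n == top[0][1]]
    return random.choice(winners)

# Three hypothetical members that threshold a single "complexity" feature.
members = [lambda x: int(x > 5), lambda x: int(x > 7), lambda x: int(x > 20)]
print(majority_vote(members, 10))  # 1: two of the three members vote 1
```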
2.6.1 Partitioning Original Dataset
Due to the shortage of data in some data analysis problems, approaches such as bagging\textsuperscript{6,12} have been used to create samples by varying the data subsets selected. The bagging algorithm is very efficient in constructing a reasonably sized training set owing to its random sampling with replacement. Such a strategy (which we also use in this study) is therefore a useful data preparation method for ML.
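Bootstrap sampling with replacement, the core of the bagging data-preparation step, might look like this; the fixed seed is only there to make the sketch reproducible:

```python
import random

def bootstrap_sample(dataset, rng=random):
    """Draw n instances uniformly with replacement (one bagging training set)."""
    n = len(dataset)
    return [dataset[rng.randrange(n)] for _ in range(n)]

data = list(range(10))
rng = random.Random(42)  # fixed seed for reproducibility of the sketch
sample = bootstrap_sample(data, rng)
print(len(sample), len(set(sample)))  # same size as the original; duplicates expected
```

Repeating this \( n \) times yields the training sets \( TR1, \ldots, TRn \) of step 1 in Fig. 1.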
2.6.2 Creating Diverse Classifiers
Diversity in ensemble of learnt models constitute one
of the main current directions in ML and data mining. It has been shown theoretically and experimentally that in order for an ensemble to be effective, it should consist of high-accuracy base classifiers that should have high diversity in their predictions. One technique, which has been proved to be effective for constructing an ensemble of accurate and diverse base classifiers, is to use different training data, or so-called ensemble training data selection. Many ensemble training data selection strategies generate multiple classifiers by applying a single learning algorithm, to different versions of a given dataset. Two different methods for manipulating the dataset are normally used: (i) random sampling with replacement (also called bootstrap sampling) in bagging, and (ii) re-weighting of the misclassified training instances in boosting. The authors have used bagging in this study.
2.6.3 Selecting Appropriate Ensemble Members
After training, each individual classifier grown has generated its own result. However, if there is a large number of individual members (i.e., classifiers), one needs to select a subset of representatives in order to improve ensemble efficiency. Furthermore, one does not have to follow the 'the more the better' rule mentioned by some researchers. In this study, a de-correlation maximisation method\textsuperscript{17,18} is used to select the appropriate number of ensemble members. The idea is that the correlations between the selected classifiers should be as small as possible. The de-correlation maximisation method can be summarised in the following steps:
1. Compute the variance-covariance matrix and the correlation matrix;
2. For the $i^{th}$ classifier ($i = 1, 2, \ldots, p$), calculate the plural-correlation coefficient $\eta_i$;
3. For a pre-specified threshold $\theta$, if $\eta_i > \theta$, then the $i^{th}$ classifier should be deleted from the $p$ classifiers, since it is too strongly correlated with the others. Conversely, if $\eta_i < \theta$, then the $i^{th}$ classifier should be retained;
4. For the retained classifiers, perform Eqns (1)-(3) procedures iteratively until satisfactory results are obtained.
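Since Eqns (1)-(3) and the plural-correlation coefficient are not reproduced here, the sketch below substitutes plain pairwise Pearson correlation and a greedy threshold rule. It approximates the de-correlation idea (retain members that are weakly correlated with those already kept) rather than reproducing the paper's exact procedure:

```python
import math

def correlation(a, b):
    """Pearson correlation between two equal-length output sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(var_a * var_b)

def select_decorrelated(outputs, theta):
    """Greedy stand-in for steps 1-3: keep a classifier only if its absolute
    correlation with every already-retained classifier stays below `theta`."""
    kept = []
    for i, out in enumerate(outputs):
        if all(abs(correlation(out, outputs[j])) <= theta for j in kept):
            kept.append(i)
    return kept

# Toy member outputs over five test instances (invented for illustration).
outputs = [
    [1.0, 2.0, 3.0, 4.0, 5.0],   # member 0
    [1.1, 2.1, 2.9, 4.2, 5.1],   # nearly identical to member 0 -> dropped
    [5.0, 1.0, 4.0, 2.0, 3.0],   # only weakly correlated -> retained
]
print(select_decorrelated(outputs, theta=0.9))  # [0, 2]
```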
2.6.4 Performance Measure Evaluation
In the previous phase, the classifier outputs are used as performance evaluation measures (in terms of misclassification error rates). It has often been argued that selecting and evaluating a classification model based solely on its error rates is inappropriate. The argument is based on the issue of using both the false positive (rejecting a null hypothesis when it is actually true) and false negative (failing to reject a null hypothesis when it is in fact false) errors as performance measures whenever classification models are used and compared. Furthermore, in the business world, decisions (of the classification type) involve costs and expected profits. The classifier is then expected to help making the decisions that will maximise profits.
For example, predicting faults, failures, or defects of software systems involves two types of errors: (i) predicting software faults as likely to be high when in fact these are low, and (ii) predicting software faults as likely to be low when in fact these are high. A mere misclassification rate is simply not good enough for predicting software faults. To overcome this problem, and further to make allowances for the inequality of mislabelled classes, variable misclassification costs are incorporated into our attribute selection criterion via prior specification for all our experiments. This also addresses the imbalanced data problem. Details of how misclassification costs are used in both splitting and pruning rules are presented by Breiman\textsuperscript{19}, \textit{et al.}
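Cost-sensitive classification of the kind described, where unequal misclassification costs shift the decision away from the raw posterior, can be sketched as follows; the 5:1 cost ratio is an invented illustration:

```python
def expected_cost_class(posteriors, costs):
    """Choose the class that minimises the expected misclassification cost.

    posteriors: {class: P(class | x)}; costs[(true, predicted)] is the cost of
    predicting `predicted` when the truth is `true` (0 on the diagonal).
    """
    def cost_of(pred):
        return sum(p * costs[(true, pred)] for true, p in posteriors.items())
    return min(posteriors, key=cost_of)

# A missed fault (false negative) is assumed 5x as costly as a false alarm.
costs = {("fault", "fault"): 0, ("fault", "clean"): 5,
         ("clean", "clean"): 0, ("clean", "fault"): 1}
posteriors = {"fault": 0.3, "clean": 0.7}
print(expected_cost_class(posteriors, costs))  # "fault": 0.3*5 > 0.7*1 flips the call
```

With equal costs the same function reduces to picking the most probable class.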
2.6.5 Integrating Multiple Classifiers into Ensemble Output
Depending upon the work in the previous stages, an appropriate number of ensemble members can be identified. The subsequent task is to combine these selected members into an aggregated classifier using an appropriate ensemble strategy. Common strategies for combining these single DT results to produce the final output are simple averaging, weighted averaging, ranking, and majority voting.
- **Simple averaging** is one of the most frequently used combination methods. After training the members of the ensemble, the final output is obtained by averaging the outputs of the ensemble members. Some experiments have shown that simple averaging is an effective approach.
- **Weighted averaging** is where the final ensemble result is calculated based on individual ensemble members’ performances and a weight attached to each individual member’s output. The gross weight is 1 and each member of an ensemble is entitled to a portion of this gross weight according to its performance or diversity.
- **Ranking** is where members of the ensemble are called low-level classifiers and they produce not only a single result but a list of choices ranked according to their likelihood. Then the high-level classifier chooses from these set of classes using additional information that is not usually available to or well represented in a single low-level classifier.
- **Majority voting** is the most popular combination method for classification problems because of its easy implementation. Every member votes on the value of each output dimension, and a result is accepted as the final output of the ensemble only when more than half of the members agree on it (regardless of the diversity and accuracy of each tree's generalisation). Majority voting ignores the fact that trees in the minority sometimes produce correct results. Nevertheless, this is the combination strategy followed in this study.
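Weighted averaging, as described above, might be sketched like this; the member accuracies used as raw weights are invented for the example:

```python
def weighted_average(outputs, weights):
    """Weighted-averaging combiner: the weights sum to 1 (the 'gross weight'),
    apportioned to members according to their performance."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * o for w, o in zip(weights, outputs))

# Three members' predicted fault probabilities for one module; more accurate
# members (hypothetical accuracies used as raw weights) count for more.
raw = [0.9, 0.8, 0.6]                   # member accuracies (assumed)
weights = [a / sum(raw) for a in raw]   # normalise to a gross weight of 1
outputs = [0.8, 0.7, 0.2]               # members' predicted fault probabilities
score = weighted_average(outputs, weights)
print(score >= 0.5)  # True: the ensemble flags the module as fault-prone
```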
### 3. RELATED WORK
Significant advances have been made in the past few decades in methodologies for predicting software faults, failures, and defects. Unfortunately, these methodologies are often not available to many researchers for a variety of reasons (for example, lack of familiarity or computational challenges). As a result, researchers often resort to ad hoc approaches when dealing with software faults. Nonetheless, several researchers have examined various statistically-based and machine learning approaches to the problem in engineering. Specific results are discussed below.
Marwala and Hunt use vibration data to identify faults in structures. The idea is to use a committee of neural networks that employs both frequency response functions and modal data simultaneously in order to identify faults in structures. Their proposed approach is tested on simulated data and shows very promising results. Marwala follows up this approach by proposing a Bayesian-formulated neural network approach, using hybrid Monte Carlo and the scaled conjugate gradient method, for identifying faults in structures from modal properties, the coordinate modal assurance criterion, and vibration data. Marwala argues that such an approach gives the identities of damage and their respective standard deviations. The proposed approach gives more accurate fault identification results than the existing approaches, although it was found to be computationally expensive.
An anomaly detection method for spacecraft system based on association rules is proposed by Yairi et al. The method constructs a system behaviour model in the form of a set of rules by applying pattern clustering and association rule mining using past housekeeping data of engineering test satellite and time series data. The proposed approach has the advantage of not requiring prior knowledge on the system. Thus, it can be applied to various kinds of spacecrafts at small costs. The association-rules-based approach compares favourably with existing data mining methods but has the slight advantage of detecting some anomalies which (otherwise) could have been overlooked by conventional approaches.
Tree-based models were used by Koru and Tian to investigate the relationship between high-defect and high-complexity software modules in six large-scale products of IBM and Nortel Networks. The study was conducted on 15 method-level metrics for the IBM products and 45 method-level metrics for the Nortel Networks products. They provided evidence that highly defect-prone modules are generally complex modules, although they tend to rank just below the most complex modules.
The pioneering work by Guo et al. performed a comprehensive simulation study to evaluate three techniques in the context of software faults modelling using NASA’s KC2 project dataset. These techniques are Bayesian networks (BN), logistic regression (LR) and discriminant analysis (DA). Their results show BN as not only being more robust to software faults prediction but also as a more cost-effective strategy. Their results further show LR as a better method for choosing the best software metrics that are more prone to faults.
Menzies et al. performed a simulation study comparing two machine learning methods for software defect prediction, with the probability of detection and the probability of false alarm as their performance evaluation metrics. Based on a National Aeronautics and Space Administration (NASA) dataset, their results showed the NBC performing better than the J48 algorithm. Menzies followed up this work by carrying out a comparative evaluation (in terms of software fault prediction) of linear regression, trees, ROCKY, and Delphi detectors. NASA datasets were used for this task. The results showed ROCKY performing better than the other methods.
Fujimaki et al. propose a novel anomaly detection method for a spacecraft system, based on kernel feature (attribute) space and directional distribution, which constructs a system behaviour model using telemetry data obtained from a simulator of an orbital transfer vehicle designed to make a rendezvous manoeuvre with the International Space Station. The method shows promising results.
---
* For more information on these strategies, the reader is referred to Dietterich; the strategies are otherwise briefly discussed below.
Another comparative study, of J48, KStar, artificial neural networks (ANN), Bayesian networks, and SVM in the context of software fault estimation, was carried out by Koru and Liu\textsuperscript{29}, who suggested using fault predictors on large components. The simulation study was carried out using public NASA datasets at both the method and class levels, with the F-ratio as the performance measure. Their results show J48 exhibiting higher accuracy rates, while methods such as BN, ANN, and SVM struggled. In addition, most of the methods did not perform well whenever there were small components in the datasets. In their next study, Koru and Liu\textsuperscript{30} built prediction models using J48, KStar, and random forests on NASA datasets. Both method-level and class-level metrics were used. They showed how class-level metrics improved model performance. Detection of faults was also shown to be better at the class level than at the method level.
The use of ML for the purposes of predicting or estimating a software module’s fault-proneness is proposed by Gondra\textsuperscript{30} who views fault-proneness as both a continuous measure and a binary classification task. An ANN is used to predict the continuous measure with SVM used for the classification task. A comparative study of the effectiveness of both methods was then performed on a NASA public dataset. The experimental results confirmed the superior performance of SVMs over ANNs.
The performance of LR, DA, DT, boosting, kernel density, NBC, J48, IBk, voted perceptron, VF1, hyper-pipes, and random forest techniques was analysed by Ma\textsuperscript{31}, et al. using NASA datasets, with g-mean1, g-mean2, and the F-ratio as performance evaluation metrics. Their results show balanced random forests yielding good results (especially on large datasets), with boosting, rule-sets, and DTs being less robust methods for software fault prediction.
Challagulla\textsuperscript{32}, et al. evaluated memory-based reasoning (MBR) as a strategy for predicting software faults using NASA datasets and 21 method-level metrics. The probability of detection, the probability of false alarm and accuracy were used as performance evaluation metrics. Their results show promise, especially when a framework based on MBR configuration is used.
LR, NBC, random forests, and k-NN with generalisation techniques were evaluated using NASA’s KC1 datasets in the software fault context by Zhou and Leung\textsuperscript{33}. Their results showed better performances by methods for low-severity faults compared to high-severity faults. They concluded that other Chidamber-Kemerer metrics could also be useful for fault prediction.
The use of the NBC and the J48 algorithm for predicting software faults on a NASA dataset using method-level metrics was investigated by Menzies\textsuperscript{34}, et al. The performance evaluation metrics used were the probability of detection and the probability of false alarm. Their results showed NBC to be more efficient, with the dataset characteristics playing a major role in the performance of both algorithms.
Catal and Diri\textsuperscript{35} evaluated the impact of random forests and algorithms based on artificial immune systems, with the area under the receiver operating characteristic (ROC) curve used as the performance evaluation measure. NASA datasets were utilised for this task. Their results show random forests achieving the highest accuracy rates, with other notably good performances by methods such as NBC (especially for small datasets). Mendes and Koschke\textsuperscript{36} evaluated several data mining algorithms for fault prediction using 13 NASA datasets. Using the area under the ROC curve as the performance measure, they could not find any statistically significant difference in performance among the algorithms.
According to the above studies, it appears, first, that there is currently reasonable data for modelling software fault prediction, thanks to the use of public datasets by researchers. Secondly, method-level metrics appear to be dominant in software fault prediction, with class-level metrics hardly utilised. Also, ML algorithms remain the most popular methods compared with either statistical methods or expert-opinion-based approaches. However, among some of the ML algorithms the results are not so clear, especially for large amounts of data. It also appears that the poor and good performances of each algorithm depend heavily on the characteristics of the respective dataset. In other words, the nature of the attributes determines the applicability of fault-detection techniques.
4. EXPERIMENTAL SET-UP
4.1 Experiment I
To empirically evaluate the performance of the top five classifiers in data mining (AR, DT, k-NN, NBC and SVM), an experiment was conducted on four datasets in terms of misclassification error rate. For each dataset, different types of metrics were used to predict modules that were likely to harbour faults. In other words, to carry out the experiments, individual requirements were related to software modules.
Three of the four datasets were collected from the NASA metrics data program (MDP) data repository (http://mdp.ivv.nasa.gov). They comprised three projects (CM1, JM1 and PC1) of which only partial requirement metrics are available. There are 10 attributes that describe the requirements. Briefly, CM1 describes software artifacts of a NASA spacecraft instrument; JM1 represents a real-time ground system that uses simulations to generate flight predictions; PC1 refers to an Earth-orbiting satellite software system. A combination of the requirement metrics and static code metrics was used in the experiments, with CM1 eventually having 266 instances, JM1 having 97 instances, and PC1 having 477 instances.
The other dataset was drawn from an institutional database of anomaly reports for multiple missions managed for NASA by NASA’s Jet Propulsion Laboratory (JPL/NASA)\textsuperscript{37}. The reporting mechanism for the anomalies is a report form called incident/surprise/anomaly (ISA). An ISA is written whenever the behaviour of the system differs from the expected behaviour as judged by the operator. The dataset consisted of 199 critical ISAs from seven NASA spacecraft that occurred between the launch date of each spacecraft and 21 August 2001.
To perform the experiment, each complete dataset was split randomly into five parts (Part I, Part II, Part III, Part IV, Part V) of equal (or approximately equal) sizes. Five-fold cross-validation was used for the experiment. For each fold, four of the parts were placed in the training set, and the remaining one was placed in the corresponding test set, as shown in Table 1.
<table>
<thead>
<tr>
<th colspan="3">Table 1. Partitioning of dataset to training and test sets</th>
</tr>
<tr>
<th></th>
<th>Training set</th>
<th>Test set</th>
</tr>
</thead>
<tbody>
<tr>
<td>Fold 1</td>
<td>Parts II, III, IV, V</td>
<td>Part I</td>
</tr>
<tr>
<td>Fold 2</td>
<td>Parts I, III, IV, V</td>
<td>Part II</td>
</tr>
<tr>
<td>Fold 3</td>
<td>Parts I, II, IV, V</td>
<td>Part III</td>
</tr>
<tr>
<td>Fold 4</td>
<td>Parts I, II, III, V</td>
<td>Part IV</td>
</tr>
<tr>
<td>Fold 5</td>
<td>Parts I, II, III, IV</td>
<td>Part V</td>
</tr>
</tbody>
</table>
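The partitioning scheme described above can be sketched in a few lines of Python. This is an illustrative sketch only, not the authors' code; the function name `five_fold_partitions` and the random-seed convention are assumptions made here.

```python
import random

def five_fold_partitions(instances, seed=0):
    """Split the instances into five roughly equal parts, then yield
    (training_set, test_set) pairs: each fold holds one part out as the
    test set and trains on the remaining four, as in Table 1."""
    items = list(instances)
    random.Random(seed).shuffle(items)        # random but reproducible split
    parts = [items[i::5] for i in range(5)]   # five (approximately) equal parts
    for k in range(5):
        test = parts[k]
        train = [x for i, p in enumerate(parts) if i != k for x in p]
        yield train, test

# Example with 266 instances, the size of the CM1 dataset
folds = list(five_fold_partitions(range(266)))
```

Each instance appears in exactly one test set across the five folds, so every instance is used for both training and testing.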
The predictive models were constructed using five classifiers from the WEKA toolkit\textsuperscript{36}. WEKA is a suite of tools for data classification, regression, clustering, association rules, and visualisation. The toolkit is developed in Java and is open-source software. All five classifiers were used with their default settings (and, in some cases, control parameters) as implemented in WEKA. Furthermore, both the WEKA library and the NASA datasets are publicly available; thus, the results can be easily checked and reproduced.
To measure the performance of these classifiers, the training-set/test-set methodology was employed, i.e., each classifier was built on the training data and its predictive accuracy was measured by the smoothed error rate estimated on the test data. Instead of summing terms that are either 0 or 1, as in the error-count estimator, the smoothed error rate sums a continuum of values between 0 and 1. The resulting estimator has a smaller variance than the error-count estimate. The smoothed error rate is also very helpful when there is a tie between two competing classes.
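The distinction between the two estimators can be illustrated as follows. This is a minimal sketch under the assumptions of binary labels (0/1) and access to the classifier's estimated class posteriors; both function names are invented here for illustration.

```python
def error_count_rate(y_true, y_pred):
    """Classical error-count estimator: each test case contributes
    either 0 (correct) or 1 (misclassified)."""
    return sum(t != p for t, p in zip(y_true, y_pred)) / len(y_true)

def smoothed_error_rate(y_true, p_class1):
    """Smoothed estimator: each test case contributes the classifier's
    estimated posterior probability of the *incorrect* class, a value
    between 0 and 1, which lowers the variance of the estimate."""
    total = 0.0
    for t, p1 in zip(y_true, p_class1):
        total += (1.0 - p1) if t == 1 else p1
    return total / len(y_true)
```

A classifier that is uncertain near the decision boundary is thus penalised gradually rather than by a full unit per case.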
Another point to note is the reason for using differences in error rates when comparing the classifiers, rather than, say, ratios of error rates. First, differences are natural and on an understandable scale in this context: a worsening of one percentage point in error rate is understood to mean a simple addition of one per cent. Secondly, ratios of error rates lead to statements such as "A increases the error rate by x per cent", which could be misinterpreted as meaning an x percentage-point difference in error rate. Finally, the analysis of variance assumes the error rates to be on an additive rather than a multiplicative scale.
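A tiny numerical example of the difference-versus-ratio distinction, using hypothetical error rates invented for illustration:

```python
# Hypothetical error rates for a baseline classifier and a worse one
base, worse = 0.20, 0.25

difference = worse - base            # 0.05: five percentage points (additive scale)
relative = (worse - base) / base     # 0.25: a 25 per cent relative increase
```

The same pair of results can honestly be reported as "5 points worse" or "25 per cent worse", which is exactly the ambiguity the authors avoid by working with differences.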
All statistical tests were conducted using the MINITAB statistical software program\textsuperscript{37}. Analyses of variance, using the general linear model (GLM) procedure\textsuperscript{38}, were used to examine the main effects and their respective interactions. This was done using a repeated-measures design (where each effect was tested against its interaction with datasets). The fixed-effect factor is the set of five classifiers. The four datasets were used to estimate the smoothed error rate. The results were averaged across the five folds of the cross-validation process before carrying out the statistical analysis; the averaging was done to reduce error variance.
4.1.1 Results
Experimental results on the effects of the five classifiers on software fault predictive accuracy are described below. The error rates for each classifier were initially averaged over the four datasets; results for the individual datasets are then presented. All error rates are expressed as increases over the complete-data case, formed by taking differences.
From Fig. 3, NBC is the overall winner as a fault prediction technique with an excess error rate of 22.4 per cent, followed by DT, SVM and k-NN, with excess error rates of 27.3 per cent, 29.3 per cent and 30.0 per cent, respectively. The worst technique is AR, which exhibits an error rate of 32.1 per cent.
Figure 3. Performance of classifiers.
Tukey’s multiple comparison tests showed significant differences between most of the classifiers (with the exception of SVM and k-NN). The significance level for all the comparison tests was 0.05.
The performances of the five classifiers on the individual datasets are given in Fig. 4, from which the following observations are made.
- For the PC1 data problem, it appears that NBC predicts software faults better than the other methods with an accuracy rate of 78.7 per cent followed by DT (75.3 per cent), SVM (73.3 per cent), k-NN (71.7 per cent) and AR (67.1 per cent). Multiple comparison tests showed significant differences in performances amongst all the methods at the 5 per cent level.
- The results for the JM1 data problem shows NBC (once again) exhibiting the highest accuracy rate (76.6 per cent), followed by SVM (72.9 per cent), k-NN (70.7 per cent) and DT (68.0 per cent). The worst performance was by AR with accuracy rate of 65.7 per cent. Once again, multiple comparison tests showed the five methods as significantly different from each other at the 5 per cent level.
- The results presented for the CM1 data problem show the NBC achieving an accuracy rate of 74.9 per cent, followed by DT (70.9 per cent), SVM (69.9 per cent), k-NN (66.8 per cent) and AR (64.4 per cent). Four of the five classifiers achieve bigger error rates for this kind of dataset compared with the rates achieved for the other datasets. No significant difference in performance between DT and SVM was observed at the 5 per cent level.
- For the JPL/NASA data problem, four of the classifiers achieved their smallest error rates compared with the rates obtained across the other three datasets; in fact, the average error rate of all the classifiers for this dataset is 26.1 per cent. SVM is the only classifier whose error rate on this dataset (33.4 per cent) is higher than on each of the other three datasets (26.7 per cent for CM1, 27.1 per cent for JM1, and 30.1 per cent for PC1).
4.2 Experiment II
Experiment II is similar to Experiment I; hence, detailed experimental methods are not repeated and only a summary of the set-up is given. The main objective of this experiment is to compare the performance of different ensembles for software fault prediction, with the results for each individual classifier used as a baseline. The correlation maximisation method was used to select the appropriate number of ensemble members; three classifiers per ensemble were chosen. For each ensemble, four sampling procedures (bagging, boosting, feature selection, and randomisation) were considered. This was done for each individual dataset.
The ensembles (ES1 to ES10), each consisting of three individual classifiers, are given as: ES1 (AR, DT, k-NN); ES2 (AR, DT, NBC); ES3 (AR, DT, SVM); ES4 (AR, k-NN, NBC); ES5 (AR, k-NN, SVM); ES6 (DT, k-NN, NBC); ES7 (DT, k-NN, SVM); ES8 (DT, NBC, SVM); ES9 (k-NN, NBC, SVM); ES10 (k-NN, NBC, SVM).
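The basic mechanics of such an ensemble, with bagging as the sampling procedure and majority voting as the combination rule, can be sketched as follows. This is an illustrative sketch, not the WEKA implementation used in the experiments; the function names and toy predictions are invented here.

```python
import random
from collections import Counter

def bootstrap_sample(data, rng):
    """Bagging: draw a training sample of the same size, with replacement."""
    return [data[rng.randrange(len(data))] for _ in range(len(data))]

def majority_vote(*member_predictions):
    """Combine the test-set predictions of the ensemble members by
    simple majority vote, one vote per member per test case."""
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*member_predictions)]

# Toy illustration: predictions of three hypothetical members (e.g. AR, DT, k-NN)
m1 = [1, 0, 1, 1]
m2 = [1, 1, 1, 0]
m3 = [0, 0, 1, 1]
combined = majority_vote(m1, m2, m3)   # -> [1, 0, 1, 1]
```

With three members and binary class labels there are no voting ties, which is one practical reason for choosing three classifiers per ensemble.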
4.2.1 Results: Main Effects
All the main effects were found to be significant at the 5 per cent level of significance (F = 87.3, df = 9 for ensembles methods; F = 17.4, df = 3 for sampling procedures; p < 0.05 for each).
Figure 5 summarises the error rates for 10 classifier ensembles on software fault-proneness prediction. The behaviour of these ensemble methods was explored under varying sampling procedures. The error rates of each ensemble were averaged over the four datasets. From the results it follows that the ensemble of AR, DT, and k-NN achieved the highest accuracy rates (especially when bagging was used as a sampling procedure). The worst performance was by ES9 (with feature selection) whose components are k-NN, NBC and SVM. All the ensembles performed badly when randomization was used as a sampling procedure.
For the CM1 data problem, Fig. 6 shows that the ensembles have, on average, the best accuracy throughout the sampling-procedure spectrum compared with the individual classifiers. Also, it appears that most of the ensembles with DT as one of their components achieve higher accuracy rates. Most of the ensembles achieved higher accuracy when boosting or bagging was used as the sampling procedure.
Figure 7 shows a comparison of results for the JM1 data problem, giving the error rates of the 10 ensembles as a function of predictive accuracy. Randomisation systems that combine outputs from models constructed using AR and DT (on the one hand) and either k-NN, SVM or NBC (on
The results using ensemble methods for the JPL/NASA data problem (Fig. 9) are nearly identical to those observed for PC1 data. However, the performances of all the ensemble methods improve for this kind of dataset with error rates as small as 16.7 per cent (for ES1 when bagging was used) observed. ES10 using randomisation exhibits the worst performance with an error rate of 27.3 per cent.
One surprising result is the poor performance of ES10, especially when boosting was used. This was not the case for the other datasets where boosting always yielded good results.
5. DISCUSSION AND CONCLUSIONS
Accurate prediction of software faults in space systems can be very valuable to engineers, especially those dealing with software development processes. This is important for minimising cost and improving effectiveness of the software testing process. The major contributions have been the application of the top five machine learning algorithms to predict software faults in space systems, and the further use of multiple classifier learning to improve software fault predictive accuracy. Four NASA public datasets were utilised for this task. The results suggest that the ML algorithms can be successfully applied in software fault prediction, with multiple classifier learning providing overall significant increases in classification performance.
Based on evidence, it has been found that most of the ensembles improve the prediction accuracy of the baseline classifiers (AR, DT, k-NN, NBC and SVM) with the ensembles that have AR and DT as their components performing well. This improvement is achieved mainly in all the datasets for both bagging and boosting. Surprisingly, most of the ensembles with NBC as one of its components did not perform as good as when NBC was just a single classifier. Also, the overall performance of feature selection for all the ensembles was very poor. This was the case for all the datasets. Individually, NBC is the most effective classifier for all the datasets. The performance of DT is equally good, especially for the bigger datasets. SVM is more effective for small datasets while AR is a poor performer for any type of dataset. An important question is why does NBC outperform the other classifiers by such significant margins? One reason could be the level of inertia displayed by each classifier.
From both experiments, there exist threats to the validity of the results. All four datasets were obtained from NASA; hence, the conclusions could be biased. One such threat was duplicates among some of the instances; these were deleted from the analysis. The second threat was the amount of missing values in the data, which were dealt with using multiple imputation. This was the case for all four datasets and was, in part, a time-consuming exercise. Help was also sought from domain experts to explain why some attribute values could be missing. In addition, clarification of unclear descriptions of some of the software modules was sought from the project personnel.
The performances of the ensemble methods were observed in terms of the smoothed misclassification error rate. A natural extension would be to consider the impact of such ensemble methods on other measures of performance, in particular measures of group separation such as the GINI index, or the magnitude of relative error that is also commonly used to assess classifier performance in the SE industry.
The ensembles require further investigation on a number of fronts, for example, in terms of training parameters and the combination rules that can be employed. Also, empirical studies of the application of the ensembles to datasets from other areas of data mining should be undertaken to assess their performance across a more general field.
ACKNOWLEDGEMENTS
This work was funded by Department of Electrical Engineering and Electronic Engineering Science at the University of Johannesburg, South Africa. The comments and suggestions from my colleagues and the anonymous reviewers greatly improved this paper. The author would also like to thank NASA for making the datasets available.
REFERENCES
Contributor
Prof (Dr) Bhekisipho Twala received his BA (Economics and Statistics) from the University of Swaziland, in 1993; MSc (Computational Statistics) from Southampton University, in 1995; and PhD (Machine Learning and Statistics) from the Open University, in 2005. He is currently a Professor of Artificial Intelligence and Statistical Science in the Department of Electrical and Electronic Engineering Science, University of Johannesburg, South Africa, where he is involved in developing novel and innovative solutions (using AI technologies) to key research problems in the field of electrical and electronic engineering science. His broad research interests include multivariate statistics, classification methods, knowledge discovery and reasoning with uncertainty, sensor data fusion and inference, and the interface between statistics and computing.
Sieve: Actionable Insights from Monitored Metrics in Distributed Systems
Jörg Thalheim\(^1\), Antonio Rodrigues\(^2\), Istemi Ekin Akkus\(^3\), Pramod Bhatotia\(^1\), Ruichuan Chen\(^3\), Bimal Viswanath\(^4\), Lei Jiao\(^5\)\(^\#\), Christof Fetzer\(^6\)
\(^1\)University of Edinburgh, \(^2\)Carnegie Mellon Univ., \(^3\)NOKIA Bell Labs, \(^4\)University of Chicago, \(^5\)University of Oregon, \(^6\)TU Dresden
Abstract
Major cloud computing operators provide powerful monitoring tools to understand the current (and prior) state of the distributed systems deployed in their infrastructure. While such tools provide a detailed monitoring mechanism at scale, they also pose a significant challenge for the application developers/operators to transform the huge space of monitored metrics into useful insights. These insights are essential to build effective management tools for improving the efficiency, resiliency, and dependability of distributed systems.
This paper reports on our experience with building and deploying **Sieve**—a platform to derive actionable insights from monitored metrics in distributed systems. **Sieve** builds on two core components: a metrics reduction framework, and a metrics dependency extractor. More specifically, **Sieve** first reduces the dimensionality of metrics by automatically filtering out unimportant metrics by observing their signal over time. Afterwards, **Sieve** infers metrics dependencies between distributed components of the system using a predictive-causality model by testing for Granger Causality.
We implemented **Sieve** as a generic platform and deployed it for two microservices-based distributed systems: OpenStack and ShareLatex. Our experience shows that (1) **Sieve** can reduce the number of metrics by at least an order of magnitude (10–100×), while preserving the statistical equivalence to the total number of monitored metrics; (2) **Sieve** can dramatically improve existing monitoring infrastructures by reducing the associated overheads over the entire system stack (CPU: 80%, storage: 90%, and network: 50%); (3) lastly, **Sieve** can be effective in supporting a wide range of workflows in distributed systems. We showcase two such workflows: orchestration of autoscaling, and Root Cause Analysis (RCA).
Keywords Microservices, Time series analysis
\(^\#\)Authors did part of the work at NOKIA Bell Labs.
1 Introduction
Most distributed systems are constantly monitored to understand their current (and prior) states. The main purpose of monitoring is to gain actionable insights that would enable a developer/operator to take appropriate actions to better manage the deployed system. Such insights are commonly used to manage the health and resource requirements as well as to investigate and recover from failures (root cause identification). For these reasons, monitoring is a crucial part of any distributed system deployment.
All major cloud computing operators provide a monitoring infrastructure for application developers (e.g., Amazon CloudWatch \(^2\), Azure Monitor \(^12\), Google StackDriver \(^5\)). These platforms provide infrastructure to monitor a large number (hundreds or thousands) of various application-specific and system-level metrics associated with a cloud application. Although such systems feature scalable measurement and storage frameworks to conduct monitoring at scale, they leave the task of transforming the monitored metrics into usable knowledge to the developers. Unfortunately, this transformation becomes difficult with the increasing size and complexity of the application.
In this paper, we share our experience on: **How can we derive actionable insights from the monitored metrics in distributed systems?** In particular, given a large number of monitored metrics across different components (or processes) in a distributed system, we want to design a platform that can derive actionable insights from the monitored metrics. This platform could be used to support a wide-range of use cases to improve the efficiency, resiliency, and reliability of distributed systems.
In this work, we focus on microservices-based distributed systems because they have become the de-facto way to design and deploy modern day large-scale web applications \(^48\). The microservices architecture is an ideal candidate for our study for two reasons: First, microservices-based applications have a large number of distributed components (hundreds to thousands \(^45, 56\)) with complex communication patterns, each component usually exporting several metrics for the purposes of debugging, performance diagnosis, and application management. Second, microservices-based applications are developed at a rapid pace: new features are being continuously integrated and deployed. Every new update may fix some existing issues, introduce new features, but can also introduce a new bug. With this rapid update schedule, keeping track of the changes in the application as a whole with effects propagating to other components becomes critical for reliability, efficiency, and management purposes.
The state-of-the-art management infrastructures either rely on ad hoc techniques or custom application-specific tools. For instance, prior work in this space has mostly focused on analyzing message-level traces (instead of monitored metrics) to generate a causal model of the application to debug performance issues \(^30, 42\). Alternatively, developers usually create and use custom tools to address the complexity of understanding the application as a whole. For example, Netflix developed several application-specific tools for such purposes \(^8, 45\) by instrumenting the entire application. These approaches require either complicated instrumentation or sophisticated techniques to infer happens-before relationships (for the causal model) by analyzing message trace timestamps, making them inapplicable for broader use.
This paper presents our experience with designing and building **Sieve**, a system that can utilize an existing monitoring infrastructure (i.e., without changing the monitored information) to infer actionable insights for application management. **Sieve** takes a data-driven approach to enable better management of microservices-based applications. At its core, **Sieve** is composed of two key modules: (1) a metric reduction engine that reduces the dimensionality of the metric space by filtering out metrics that carry redundant information, (2) a metric dependency extractor that builds a causal...
model of the application by inferring causal relationships between metrics associated with different components.
Module (1) enables Sieve to identify "relevant" metrics for a given application management task. For instance, it might be sufficient to monitor only a few metrics associated with error states of the application instead of the entire set when monitoring the health of the application. It is important to also note that reducing the metric space has implications for deployment costs: frameworks like Amazon CloudWatch use a per-metric charging model, and not identifying relevant metrics can significantly drive up the cost related to monitoring the application.
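One plausible sketch of such a reduction step, dropping near-constant metrics and then collapsing groups of highly correlated metrics to a single representative, is shown below. This illustrates the general idea only, not Sieve's actual algorithm; the function name `reduce_metrics` and the 0.95 correlation threshold are assumptions made here.

```python
import numpy as np

def reduce_metrics(metrics, corr_threshold=0.95):
    """Reduce a dictionary of metric time series (name -> 1-D array of
    equal length) by (a) dropping near-constant metrics, which carry no
    signal over time, and (b) keeping only one representative out of
    each group of highly correlated (i.e., redundant) metrics."""
    names = [n for n, s in metrics.items() if np.std(s) > 1e-9]
    kept = []
    for name in names:
        s = metrics[name]
        redundant = any(
            abs(np.corrcoef(s, metrics[k])[0, 1]) >= corr_threshold
            for k in kept)
        if not redundant:
            kept.append(name)
    return kept
```

A scaled copy of a metric (e.g., the same counter exported in different units) is perfectly correlated with the original and would be filtered out, directly reducing the per-metric monitoring cost mentioned above.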
Module (2) is crucial for inferring actionable insights because it is able to automatically infer complex application dependencies. In a rapidly updating application, the ability to use such complex dependencies and how they may change is important for keeping one’s understanding of the application as a whole up-to-date. Such up-to-date information can be helpful for developers to quickly react to any problem that may arise during deployment.
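The core idea behind testing for Granger causality between two metric time series can be sketched as a comparison of two lagged regressions: does the history of metric x improve a prediction of metric y beyond y's own history? This is a minimal one-lag illustration, not Sieve's implementation; the function name `granger_f_stat` is invented here.

```python
import numpy as np

def granger_f_stat(x, y, lag=1):
    """One-lag Granger-style test: compare a restricted model that
    predicts y from its own lag against an unrestricted model that also
    uses the lag of x. Returns the F statistic; a large value suggests
    that x 'Granger-causes' y."""
    y_t, y_lag, x_lag = y[lag:], y[:-lag], x[:-lag]
    # Restricted model: y_t ~ 1 + y_{t-1}
    A_r = np.column_stack([np.ones_like(y_lag), y_lag])
    rss_r = np.sum((y_t - A_r @ np.linalg.lstsq(A_r, y_t, rcond=None)[0]) ** 2)
    # Unrestricted model: y_t ~ 1 + y_{t-1} + x_{t-1}
    A_u = np.column_stack([A_r, x_lag])
    rss_u = np.sum((y_t - A_u @ np.linalg.lstsq(A_u, y_t, rcond=None)[0]) ** 2)
    df_u = len(y_t) - A_u.shape[1]            # residual degrees of freedom
    return (rss_r - rss_u) / (rss_u / df_u)   # one restriction (q = 1)
```

In practice the F statistic would be compared against an F distribution to decide significance, and the test repeated over pairs of metrics from different components to build the dependency graph.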
We implemented Sieve as a generic platform, and deployed it with two microservices-based distributed systems: ShareLatex [22] and OpenStack [26]. Our experience shows that (1) Sieve can reduce the number of monitored metrics by an order of magnitude (10–100×), while preserving the statistical equivalence to the total number of monitored metrics. In this way, the developers/operators can focus on the metrics that actually matter. (2) Sieve can dramatically improve the efficiency of existing metrics monitoring infrastructures by reducing the associated overheads over the entire system stack (CPU: 80%, storage: 90%, and network: 50%). This is especially important for systems deployed in a cloud infrastructure, where the monitoring infrastructures (e.g. AWS CloudWatch) charge customers for monitoring resources. And finally, (3) Sieve can be employed for supporting a wide range of workflows. We showcase two such case studies: In the first case study, we use ShareLatex [22] and show how Sieve can help developers orchestrate autoscaling of microservices-based applications. In the second case study, we use OpenStack [26] and show how developers can take advantage of Sieve’s ability to infer complex dependencies across various components in microservices for Root Cause Analysis (RCA). Sieve’s source code with the full experimentation setup is publicly available: https://sieve-microservices.github.io/.
## 2 Overview
In this section, we first present some background on microservices-based applications and our motivation to focus on them. Afterwards, we present our goals, and design overview.
### 2.1 Background and Motivation
Microservices-based applications consist of loosely-coupled distributed components (or processes) that communicate via well-defined interfaces. Designing and building applications in this way increases modularity, so that developers can work on different components and maintain them independently. These advantages make the microservices architecture the de facto design choice for large-scale web applications [48].
While increasing modularity, such an approach to developing software can also increase the application complexity: as the number of components increases, the interconnections between components also increase. Furthermore, each component usually exports several metrics for the purposes of debugging, performance diagnosis, and application management. Therefore, understanding the dependencies between the components and utilizing these dependencies together with the exported metrics becomes a challenging task. As a result, understanding how the application performs as a whole becomes increasingly difficult.
Typical microservices-based applications are composed of hundreds of components [45, 56]. Table 1 shows real-world microservices-based applications that have tens of thousands of metrics and hundreds of components. We experimented with two such applications, ShareLatex [22] and OpenStack [18], each having several thousand metrics and on the order of tens of components. The metrics in these applications come from all layers of the application, such as hardware counters, resource usage, business metrics, and application-specific metrics.
To address this data overload issue, developers of microservices-based applications usually create ad hoc tools. For example, application programmers at Netflix developed several application-specific tools for such purposes [8, 45]. These tools, however, require the application under investigation to be instrumented, so that the communication pattern between components can be established by following requests coming into the application. This kind of instrumentation requires coordination among developers of different components, which can be a limiting factor for modularity.
Major cloud computing operators also provide monitoring tools for recording all metric data from all components. For example, Amazon CloudWatch [2], Azure Monitor [12], and Google StackDriver [5]. These monitoring tools aid in visualizing and processing metric data in real-time (i.e., for performance observation) or after an issue with the application (i.e., for debugging). These tools, however, either use a few system metrics that are hand-picked by developers based on experience, or simply record all metric data for all the components.
Relying on past experience may not always be effective due to the increasing complexity of a microservices-based application. On the other hand, recording all metric data can create significant monitoring overhead in the network and storage, or, when running the application in a cloud infrastructure (e.g., AWS), it can incur costs because the provider charges for monitoring (e.g., CloudWatch). For these reasons, it is important to understand the dependencies between the components of a microservices-based application. Ideally, this process should not be intrusive to the application. Finally, it should help the developers identify and minimize the set of critical components and metrics to monitor.
### 2.2 Design Goals
While designing Sieve, we set the following goals.
- **Generic:** Many tools for distributed systems have specific goals, including performance debugging, root cause analysis, and orchestration. Most of the time, these tools are custom-built for the application in consideration and target a certain goal. Our goal is to design a generic platform that can be used for a wide range of workflows.
- **Automatic**: The sheer number of metrics prohibits manual inspection. On the other hand, designing a generic system to help developers in many use cases might require manually adjusting some parameters for each use case. Our tool should be as automated as possible while reducing the number of metrics and extracting their relationships. However, we leave the utilization of our platform’s output to the developers, who may have different goals.
- **Efficient**: Our platform’s operation should be as efficient as possible. Minimizing analysis time becomes important when considering distributed systems, such as microservices-based applications.
### 2.3 SIEVE Overview
The underlying intuition behind **SIEVE** is two-fold: Firstly, in the metric dimension, some metrics of a component may behave with similar patterns as other metrics of that component. Secondly, in the component dimension, there are dependencies between components. As a result, monitoring all metrics of all components at runtime may be unnecessary and inefficient (as components are not independent).
In this paper, we present **SIEVE** to reduce this complexity by systematically analyzing the application to filter collected metrics and to build a dependency graph across components. To showcase the generality of this dependency graph and its benefits, we then utilize **SIEVE** to orchestrate autoscaling of the ShareLatex [22] application—an online collaboration tool, and to perform Root Cause Analysis (RCA) in OpenStack [26]—a cloud management software (§4).
At a high level, **SIEVE**’s design follows three steps as shown in Figure 1.
**Step #1: Load the application.** **SIEVE** uses an application-specific load generator to stress the application under investigation. This load generator can be provided by the application developers. For example, OpenStack already uses a load generator named Rally [20]. During the load, **SIEVE** records the communications among components to obtain a call graph. This recording does not require any modifications to the application code. In addition, **SIEVE** records all metrics exposed by all components. Note that this recording only happens during the creation of the call graph and not during runtime.
**Step #2: Reduce metrics.** After collecting the metrics, **SIEVE** analyzes each component and organizes its metrics into fewer groups via clustering, so that similar-behaving metrics are clustered together. After clustering, **SIEVE** picks a representative metric from each cluster. These representative metrics as well as their clusters in a sense characterize each component.
**Step #3: Identify dependencies.** In this step, **SIEVE** explores the possibilities of one component’s representative metrics affecting another component’s metrics using a pairwise comparison method: each representative metric of one component is compared with each representative metric of another component. **SIEVE** uses the call graph obtained in Step 1 to choose the components to be compared (i.e., components directly communicating) and the representative metrics determined in Step 2. As a result, the search space is significantly reduced compared to the naive approach of comparing all components with every other component using all metrics.
If **SIEVE** determines that there is a relationship between a metric of one component and another metric of another component, a dependency edge between these components is created using the corresponding metrics. The direction of the edge depends on which component is affecting the other.
## 3 Design
In this section, we detail the three steps of **SIEVE**.
### 3.1 Load the Application
For our systematic analysis, we first run the application under various load conditions. This loading serves two purposes: First, the load exposes a number of metrics from the application as well as the infrastructure it runs on. These metrics are then used to identify potential relationships across components. Second, the load also enables us to obtain a call graph, so that we can identify the components that communicate with each other. The call graph is later used to reduce the amount of computation required to identify the inter-component relationships (§3.3). The load test is intended to be run in an offline step and not in production.
**Obtaining metrics.** During the load of the application, we record metrics as time series. There are two types of metrics that we can leverage for our analysis: First, there are system metrics that are obtained from the underlying operating system. These metrics report the resource usage of a microservice component, and are usually related to the hardware resources on a host. Examples include usages in CPU, memory, network and disk I/O.
Second, there are application-level metrics. Application developers often add application-specific metrics (e.g., number of active users, response time of a request in a component). Commonly-used microservice components (e.g., databases, load balancers) also export metrics specific to their functionality.
**Obtaining the call graph.** Generally speaking, applications using a microservices architecture communicate via well-defined interfaces similar to remote procedure calls. We model these communications between the components as a directed graph, where the vertices represent the microservice components and the edges point from the caller to the callee providing the service.
By knowing which components communicate directly, we can reduce the number of component pairs we need to check for a relation (see Section 3.3). Although it is possible to manually track this information for smaller applications, this process quickly becomes difficult and error-prone as the number of components increases.
There are several ways to understand which microservice components are communicating with each other. One can instrument the application, so that each request can be traced from the point it enters the application to the point where the response is returned to the user. Dapper [81] from Google and Atlas [45, 58] from Netflix rely on instrumenting their RPC middleware to trace requests.
Another method to obtain communicating components is to monitor network traffic between hosts running those components using a tool like tcpdump. After obtaining the traffic, one can map the exchanged packets to the components via their source/destination addresses. This method can produce communicating component pairs by parsing all network packets, adding significant computational overhead and increasing the analysis time. Furthermore, it is possible that many microservice components are deployed onto the same host (e.g., using containers), making the packet parsing difficult due to network address translation on the host machine.
One can also observe system calls related to network operations via APIs such as ptrace() [28]. However, this approach adds significant overhead due to the many context switches between the tracer and the component under observation.
Sieve employs sysdig to obtain the communicating pairs. sysdig [23] is a recent project providing a new method to observe system calls in a more efficient way. Utilizing a kernel module, sysdig provides system calls as an event stream to a user application. The event stream also contains information about the monitored processes, so that network calls can be mapped to microservice components, even if they are running in containers. Furthermore, it enables extraction of the communication peer via user-defined filters. Employing sysdig, we avoid the shortcomings of the above approaches: 1) we do not need to instrument the application, which makes our system more generally applicable; 2) we add little overhead to obtain the call graph of an application for our analysis (see Section 6.1.5).
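Once each traced network call has been resolved to a calling and a called component, the call graph construction reduces to aggregating those pairs into a directed graph. A minimal sketch in Python, with illustrative component names standing in for events already resolved from the system-call trace (the event-to-component mapping itself is environment-specific and not shown):

```python
from collections import defaultdict

# Each traced network call, already resolved to (calling component, called
# component). The component names here are illustrative, not real trace output.
events = [
    ("web", "auth"), ("web", "docstore"), ("auth", "redis"),
    ("web", "auth"), ("docstore", "mongo"),
]

def build_call_graph(events):
    """Aggregate caller/callee pairs into a directed call graph."""
    graph = defaultdict(set)
    for caller, callee in events:
        graph[caller].add(callee)
    return {caller: sorted(callees) for caller, callees in graph.items()}

call_graph = build_call_graph(events)
```

The resulting adjacency map is what later steps consult to restrict pairwise comparisons to directly communicating components.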
### 3.2 Reduce Metrics
The primary goal of exporting metrics is to understand the performance of applications, to orchestrate them, and to debug them. While the metrics exported by the application developers or commonly-used microservice components may be useful for these purposes, the developers often have little idea regarding which ones are going to be most useful. Developers from different backgrounds may have different opinions: a developer specializing in network communications may deem network I/O the most important metric to consider, whereas a developer with a background in algorithms may find CPU usage more valuable. As a result of these varying opinions, many metrics are often exported.
While it may look like there is no harm in exporting as much information as possible about the application, it can create problems. Manually investigating the obtained metrics from a large number of components becomes increasingly difficult with the increasing number of metrics and components [35]. This complexity reflects on the decisions that are needed to control and maintain the application. In addition, the overhead associated with the collection and storage of these metrics can quickly create problems. In fact, Amazon CloudWatch [2] charges its customers for the reporting of the metrics they export. As a result, the more metrics an application has to export, the bigger the cost the developers would have to bear.
One observation we make is that some metrics strongly correlate with each other and it might not be necessary to consider all of them when making decisions about the control of the application. For example, some application metrics might be strongly correlated with each other due to the redundancy in choosing which metrics to export by the developers. It is also possible that different subsystems in the same component report similar information (e.g., overall memory vs. heap usage of a process). In addition, some system metrics may offer clues regarding the application’s state: increased network I/O may indicate an increase in the number of requests.
The direct outcome of this observation is that it should be possible to reduce the dimensionality of the metrics the developers have to consider. As such, the procedure to enable this reduction should happen with minimal user effort and scale with increased numbers of metrics.
To achieve these requirements, Sieve uses a clustering approach named k-Shape [73] with a pre-filtering step. While other approaches such as principal component analysis (PCA) [49] and random projections [72] can also be used for dimensionality reduction, these approaches either produce results that are not easily interpreted by developers (i.e., PCA) or sacrifice accuracy to achieve performance and have stability issues producing different results across runs (i.e., random projections). On the other hand, clustering results can be visually inspected by developers, who can also use any application-level knowledge to validate their correctness. Additionally, clustering can also uncover hidden relationships which might not have been obvious.
#### Filtering unvarying metrics
Before we use k-Shape, we first filter metrics with constant trend or low variance ($\text{var} \leq 0.002$). These metrics cannot provide any new information regarding the relationships across components, because they are not changing according to the load applied to the application. Removing these metrics also enables us to improve the clustering results.
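This pre-filtering step amounts to a simple variance threshold over each metric's time series. A minimal sketch, using the paper's threshold of 0.002 (metric names and values are illustrative):

```python
from statistics import pvariance

VAR_THRESHOLD = 0.002  # threshold from the paper

def filter_unvarying(metrics):
    """Keep only metrics whose time series actually vary under load."""
    return {name: series for name, series in metrics.items()
            if pvariance(series) > VAR_THRESHOLD}

metrics = {
    "cpu_usage":   [0.10, 0.55, 0.80, 0.35, 0.90],  # varies with the load
    "num_workers": [4.0, 4.0, 4.0, 4.0, 4.0],       # constant -> filtered out
}
kept = filter_unvarying(metrics)
```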
#### k-Shape clustering
k-Shape is a recent clustering algorithm that scales linearly with the number of metrics. It uses a novel distance metric called the shape-based distance (SBD), which is based on a normalized form of cross correlation (NCC) [73]. The cross correlation is calculated using the Fast Fourier Transform and normalized by the geometric mean of the autocorrelations of the two time series. Given two time series vectors $\mathbf{x}$ and $\mathbf{y}$, SBD slides $\mathbf{x}$ over $\mathbf{y}$ and takes the position $w$ where the normalized cross correlation is maximized:
$$SBD(\mathbf{x}, \mathbf{y}) = 1 - \max_w (\text{NCC}_w(\mathbf{x}, \mathbf{y})) \tag{1}$$
Because k-Shape uses a distance metric based on the shape of the investigated time series, it can detect similarities in two time series, even if one lags the other in the time dimension. This feature is important to determine relationships across components in microservices-based applications because a change in one metric in one component may not reflect on another component’s metrics immediately (e.g., due to the network delay of calls between components).
Additionally, k-Shape is robust against distortion in amplitude because data is normalized via z-normalization ($z = \frac{x-\mu}{\sigma}$) before
being processed. This feature is especially important because different metrics may have different units and thus, may not be directly comparable.
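A self-contained sketch of SBD with z-normalization follows. The paper computes the cross correlation with FFTs for linear scaling; this O(n²) version is for illustration only, and the normalization uses the standard NCC-c form (product of the two series' norms):

```python
import math

def z_normalize(x):
    # z-normalization makes series with different units/amplitudes comparable
    mu = sum(x) / len(x)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in x) / len(x))
    return [(v - mu) / sigma for v in x]

def max_ncc(x, y):
    # maximum normalized cross correlation over every shift w
    # (naive O(n^2); k-Shape uses FFTs for linear scaling)
    norm = math.sqrt(sum(v * v for v in x)) * math.sqrt(sum(v * v for v in y))
    n = len(x)
    best = -float("inf")
    for shift in range(-(n - 1), n):
        s = sum(x[i] * y[i - shift]
                for i in range(max(0, shift), min(n, n + shift)))
        best = max(best, s / norm)
    return best

def sbd(x, y):
    # Eq. (1): 1 minus the best-aligned normalized cross correlation
    return 1 - max_ncc(z_normalize(x), z_normalize(y))

a = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0, 0.0, 0.0]
b = [0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 1.0, 0.0]  # same shape, lagged by two steps
lagged_dist = sbd(a, b)
```

Because the maximization ranges over all shifts, the lagged copy `b` still scores as close to `a`, which is exactly the property that lets SBD relate metrics across components with propagation delay.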
$k$-Shape works by initially assigning time series to clusters randomly. In every iteration, it computes new cluster centroids according to SBD with the assigned time series. These centroids are then used to update the assignment for the next iteration until the clusters converge (i.e., the assignments do not change).
We make three adjustments to employ $k$-Shape in SIEVE. First, we preprocess the collected time series to be compatible with $k$-Shape. $k$-Shape expects the observations to be equidistantly distributed in the time domain. However, during the load of the application, timeouts or lost packets can cause gaps between the measurements.
To reconstruct missing data, we use spline interpolation of the third order (cubic). A spline is defined piecewise by polynomial functions. Compared to other methods such as averaging previous values or linear interpolation, spline interpolation provides a higher degree of smoothness and therefore introduces less distortion to the characteristics of a time series [66]. Additionally, monitoring systems retrieve metrics at different points in time, so the time series need to be discretized to align with each other. To increase the matching accuracy, we discretize using 500 ms intervals instead of the 2 s used in the original $k$-Shape paper [73].
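The discretization step can be sketched as bucketing samples onto a 500 ms grid, averaging samples that land in the same bucket (the cubic-spline gap filling is omitted here, since it needs a numerical library; sample timestamps and values are illustrative):

```python
def discretize(samples, step_ms=500):
    """Bucket (timestamp_ms, value) samples onto a step_ms grid,
    averaging samples that fall into the same bucket."""
    buckets = {}
    for ts, value in samples:
        key = (ts // step_ms) * step_ms
        buckets.setdefault(key, []).append(value)
    return {ts: sum(vs) / len(vs) for ts, vs in sorted(buckets.items())}

# samples arriving at irregular times snap to the 500 ms grid
grid = discretize([(0, 1.0), (200, 3.0), (600, 5.0)])
```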
Our second adjustment is to change the initial assignments of metric time series to clusters. To increase clustering performance and reduce the convergence overhead, we pre-cluster metrics according to their name similarity (e.g., Jaro distance [60]) and use these clusters as the initial assignment instead of the default random assignment. This adjustment is reasonable given that many developers use naming conventions when exporting metrics relating to the same component or resource in question (e.g., ‘cpu_usage’, ‘cpu_usage_percentile’). The number of iterations to converge should decrease compared to the random assignment, because similar names indicate similar metrics. Note that this adjustment is only for performance reasons; the convergence of the $k$-Shape clustering does not require any knowledge of the variable names and would not be affected even with a random initial assignment.
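The name-based pre-clustering can be sketched as a greedy grouping by string similarity. The paper uses the Jaro distance; here `difflib`'s similarity ratio serves as a stdlib stand-in, and the 0.6 threshold is an illustrative assumption:

```python
from difflib import SequenceMatcher

def pre_cluster_by_name(names, threshold=0.6):
    """Greedily group metric names whose similarity to a cluster's first
    member exceeds the threshold (difflib ratio stands in for Jaro)."""
    clusters = []
    for name in sorted(names):
        for cluster in clusters:
            if SequenceMatcher(None, name, cluster[0]).ratio() >= threshold:
                cluster.append(name)
                break
        else:
            clusters.append([name])
    return clusters

clusters = pre_cluster_by_name(
    ["cpu_usage", "net_rx_bytes", "cpu_usage_percentile"])
```

These groups would then seed k-Shape's initial assignment in place of the default random one.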
During the clustering process, $k$-Shape requires the number of clusters to be previously determined. In an application with several components, each of which having various number of metrics, pre-determining the ideal number of clusters may not be straightforward. Our final adjustment is to overcome this limitation: we iteratively vary the number of clusters used by $k$-Shape and pick the number that gives the best silhouette value [78], which is a technique to determine the quality of the clusters. The silhouette value is $-1$ when the assignment is wrong and $1$ when it is a perfect assignment [29]. We use the SBD as a distance measure in the silhouette computation.
In practice, experimenting with a small number of clusters suffices: for our applications, seven clusters per component were enough, even though each component had up to 300 metrics.
**Representative metrics.** After the clustering, each microservice component will have one or more clusters of metrics. The number of clusters will most likely be much smaller than the number of metrics belonging to that component. Once these clusters are obtained, SIEVE picks one representative metric from each cluster: it determines the SBD between each metric and the corresponding centroid of the cluster, and the metric with the lowest SBD is chosen as the representative metric for that cluster.
The high-level idea is that the behavior of the cluster will match this representative metric; otherwise, the rest of the metrics in the cluster would not have been in the same cluster as this metric. The set of representative metrics of a component can then be used to describe a microservice component’s behavior. These representative metrics are then used in conjunction with the call graph obtained in Section 3.1 to identify and understand the relationships across components.
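The selection above is a minimum-distance pick against the cluster centroid. A sketch, with a squared-distance stand-in for the SBD of Eq. (1) and made-up metric names:

```python
def representative(cluster, centroid):
    """Return the name of the metric whose series is closest to the centroid."""
    def dist(series):  # stand-in for SBD from Eq. (1)
        return sum((a - b) ** 2 for a, b in zip(series, centroid))
    return min(cluster, key=lambda name: dist(cluster[name]))

# illustrative cluster: three memory-related metrics of one component
cluster = {
    "heap_used": [1.0, 2.0, 3.0],
    "rss":       [1.1, 2.1, 2.9],
    "gc_time":   [3.0, 1.0, 2.0],
}
centroid = [1.0, 2.0, 3.0]
rep = representative(cluster, centroid)
```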
### 3.3 Identify Dependencies
To better understand an application, we need to find dependencies across its components. A naive way of accomplishing this goal would be to compare all components with each other using all possible metrics. One can clearly see that with the increasing number of components and metrics, this would not yield an effective solution.
In the previous section, we described how one can reduce the number of metrics one has to consider in this pairwise comparison by clustering and obtaining the representative metrics of each component. Still, comparing all pairs of components using this reduced set of metrics may be inefficient and redundant considering the number of components in a typical microservices-based application (e.g., tens or hundreds).
SIEVE uses the call graph obtained in Section 3.1 to reduce the number of components that need to be investigated in a pairwise fashion. For each component, we do pairwise comparisons using each representative metric of its clusters with each of its neighbouring components (i.e., callees) and their representative metrics.
SIEVE utilizes Granger Causality tests [54] in this pairwise comparison. Granger Causality tests are useful in determining whether a time series can be used in predicting another time series: In a microservices-based application, the component interactions closely follow the path a request takes inside the application. As a result, these interactions can be predictive of the changes in the metrics of the components in the path. Granger Causality tests offer a statistical approach in understanding the relationships across these components. Informally, Granger Causality is defined as follows. If a metric $X$ is Granger-causing another metric $Y$, then we can predict $Y$ better by using the history of both $X$ and $Y$ compared to only using the history of $Y$ [51].
To utilize Granger Causality tests in SIEVE, we built two linear models using the ordinary least-square method [32]. First, we compare each metric $X_t$ with another metric $Y_t$. Second, we compare each metric $X_t$ with the time-lagged version of the other metric $Y_{t-Lag}$. Covering the cases with a time lag is important because the load in one component may not be reflected on another component until the second component receives API calls and starts processing them.
SIEVE utilizes short delays to build the time-lagged versions of metrics. The reason is that microservices-based applications typically run in the same data center and their components communicate over a LAN, where typical round-trip times are in the order of milliseconds. SIEVE uses a conservative delay of 500ms for unforeseen delays.
To apply the Granger Causality tests and check whether the past values of metric $X$ can predict the future values of metric $Y$, both models are compared via the F-test [67]. The null hypothesis (i.e., $X$ does not Granger-cause $Y$) is rejected if the p-value is below a critical value.
However, one has to consider various properties of the time series. For example, the F-test requires the time series to be normally distributed. The load generation used in Section 3.1 can be adjusted to accommodate this requirement. Also, the F-test might find spurious regressions when non-stationary time series are included [53]. Non-stationary time series (e.g., monotonically increasing counters for CPU and network interfaces) can be found using the Augmented Dickey-Fuller test [55]. For these time series, the first difference is taken and then used in the Granger Causality tests. Although longer trends may be lost due to the first difference, accumulating metrics such as counters do not present interesting relationships for our purposes.
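The two-model comparison can be sketched in pure Python for a single lag. In practice a statistics package (e.g., statsmodels' `grangercausalitytests`) would be used; this version omits the p-value lookup and ADF pre-test and just returns the F-statistic, with synthetic series as input:

```python
import random

def _solve(A, b):
    # Gaussian elimination with partial pivoting (enough for 2-3 unknowns)
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def _ols_rss(rows, y):
    # ordinary least squares via the normal equations; residual sum of squares
    k = len(rows[0])
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    beta = _solve(xtx, xty)
    return sum((yi - sum(b * xi for b, xi in zip(beta, r))) ** 2
               for r, yi in zip(rows, y))

def granger_f(x, y, lag=1):
    # F-statistic: does x's lagged history improve the prediction of y beyond
    # y's own history? restricted: y_t ~ y_{t-lag};
    # unrestricted: y_t ~ y_{t-lag} + x_{t-lag}
    rows_r, rows_u, target = [], [], []
    for t in range(lag, len(y)):
        rows_r.append([1.0, y[t - lag]])
        rows_u.append([1.0, y[t - lag], x[t - lag]])
        target.append(y[t])
    rss_r, rss_u = _ols_rss(rows_r, target), _ols_rss(rows_u, target)
    n, q, k = len(target), 1, 3  # q: restrictions, k: unrestricted params
    return ((rss_r - rss_u) / q) / (rss_u / (n - k))

# synthetic example: y is a noisy, one-step-lagged copy of x
random.seed(42)
x = [random.random() for _ in range(60)]
y = [0.0] + [x[t - 1] + 0.1 * random.random() for t in range(1, 60)]
f_xy, f_yx = granger_f(x, y), granger_f(y, x)
```

As expected, the F-statistic in the direction x → y dwarfs the reverse direction, which is the asymmetry Sieve uses to orient dependency edges.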
After applying the Granger Causality test to each component’s representative metrics with its neighbouring component’s representative metrics, we obtain a graph. In this graph, we draw an edge between microservice components, if one metric in one component Granger-causes another metric in a neighbouring component. This edge represents the dependency between these two components and its direction is determined by Granger causality.
While Granger Causality tests are useful in determining predictive causality across microservice components, they have some limitations that we need to consider. For example, they do not cover instantaneous relationships between two variables. More importantly, they might reveal spurious relationships if important variables are missing from the system: if both X and Y depend on a third variable Z that is not considered, any relationship found between X and Y may not be useful. Fortunately, an indicator of such a situation is that both metrics will Granger-cause each other (i.e., a bidirectional edge in the graph). Sieve filters these edges out.
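Filtering such bidirectional edges is a simple set check. A sketch, with edges given as (cause, effect) pairs over illustrative metric names:

```python
def filter_bidirectional(edges):
    """Drop edge pairs present in both directions: mutual Granger causality
    hints at a hidden common cause, so both edges are discarded."""
    edge_set = set(edges)
    return sorted(e for e in edge_set if (e[1], e[0]) not in edge_set)

kept_edges = filter_bidirectional([("a", "b"), ("b", "a"), ("b", "c")])
```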
## 4 Applications
In this section, we describe two use cases to demonstrate Sieve’s ability to handle different workflows. In particular, using Sieve’s base design, we implemented 1) an orchestration engine for autoscaling and applied it to ShareLatex [22], and 2) a root cause analysis (RCA) engine and applied it to OpenStack [18].
### 4.1 Orchestration of Autoscaling
For the autoscaling case study, we used ShareLatex [22]—a popular collaborative LaTeX editor. ShareLatex is structured as a microservices-based application, delegating tasks to multiple well-defined components that include a KV-store, load balancer, two databases and 11 node.js based components.
Sieve’s pairwise investigation of representative metrics of components produces the dependencies across components. By leveraging this dependency graph, our autoscaling engine helps developers to make more informed decisions regarding which components and metrics are more critical to monitor. As a result, developers can generate scaling rules with the goal of adjusting the number of active component instances, depending on real-time workload.
More specifically, we use Sieve’s dependency graph and extract (1) guiding metrics (i.e., metrics to use in a scaling rule), (2) scaling actions (i.e., actions associated with reacting to varying loads by increasing/decreasing the number of instances subject to minimum/maximum thresholds), and (3) scaling conditions (i.e., conditions based on a guiding metric triggering the corresponding scaling action). Below, we explain how we use Sieve to generate a scaling rule:
**#1: Metric.** We pick a metric \( m \) that appears the most in Granger causality relations between components.
**#2: Scaling actions.** In our case study, we restrict scaling actions to scale in/out actions, with increments/decrements of a single component instance (+/−1).
**#3: Conditions.** The scale in/out thresholds are defined from the values of \( m \) according to a Service Level Agreement (SLA) condition. For ShareLatex, such an SLA condition can be to keep 90% of all request latencies below 1000ms. The thresholds for \( m \) are iteratively refined during the application loading phase.
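The resulting scaling rule reduces to a threshold check on the guiding metric. A sketch with illustrative thresholds (the real thresholds are derived from the SLA and refined during the loading phase):

```python
def scaling_action(metric_value, instances, low, high, min_inst=1, max_inst=10):
    """Return the new instance count for one evaluation of the scaling rule."""
    if metric_value > high and instances < max_inst:
        return instances + 1  # scale out by one instance
    if metric_value < low and instances > min_inst:
        return instances - 1  # scale in by one instance
    return instances          # within thresholds: no action
```

For example, with a latency-based guiding metric and a 1000 ms SLA-derived upper threshold, an observed 1200 ms triggers a scale-out of one instance.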
### 4.2 Root Cause Analysis
For the root cause analysis (RCA) case study, we used OpenStack [18, 26], a popular open-source cloud management software. OpenStack is structured as a microservices-based application with a typical deployment of \( \sim 10 \) (or more) individual components, each often divided into multiple sub-components [80]. Due to its scale and complexity, OpenStack is susceptible to faults and performance issues, often introduced by updates to its codebase.
In microservices-based applications such as OpenStack, components can be updated quite often [59], and such updates can affect other application components. If relationships between components are complex, such effects may not be easily foreseeable, even when inter-component interfaces are unchanged (e.g., if the density of inter-component relationships is high or if the activation of relationships is selective depending on the component’s state and inputs). Sieve’s dependency graph can be used to understand an update’s overall effect on the application: changes in the dependency graph can indicate potential problems introduced by an update. By identifying such changes, Sieve can help developers identify the root cause of the problem.
Our RCA engine leverages Sieve to generate a list of possible root causes of an anomaly in the monitored application. More specifically, the RCA engine compares the dependency graphs of two different versions of an application: (1) a correct version; and (2) a faulty version. Similarly to [61, 63], we assume that the system anomaly (but not its cause) has been observed and the correct and faulty versions have been identified. The result of this comparison is a list of \([\text{component}, \text{metric list}]\) pairs: the component item points to a component as a possible source for the issue, whereas the metric list shows the metrics in that component potentially related to the issue, providing a more fine-grained view. With the help of this list, developers can reduce the complexity of their search for the root cause.
Table 2. Description of dependency graph differences considered by the root cause analysis engine.

| Scoping level | Differences of interest |
|-------------------|-----------------------------------------------------|
| Component metrics | Present in F version, not in C (new) |
| Clusters | Cluster includes new/discarded metrics |
| Dep. graph edges | New/discarded edge between similar clusters; includes clusters w/ new/discarded metrics |
Figure 2. SIEVE’s root cause analysis methodology.
Figure 3. Pairwise adjusted mutual information (AMI) scores between 3 measurements.
Figure 2 shows the five steps involved in the comparison. At each step, we extract and analyze SIEVE’s outputs at three different granularity levels: metrics, clusters, and dependency graph edges. The levels and corresponding differences of interest are described in Table 2. We describe the steps in more detail below.
**#1: Metric analysis.** This step analyzes the presence or absence of metrics between C and F versions. If a metric m is present in both C and F, it intuitively represents the maintenance of healthy behavior associated with m. As such, these metrics are filtered out of this step. Conversely, the appearance of a new metric (or the disappearance of a previously existing metric) between versions is likely to be related with the anomaly.
**#2: Component rankings.** In this step, we use the results of step 1 to rank components according to their novelty score (i.e., total number of new or discarded metrics), producing an initial group of interesting components for RCA.
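The novelty score and the resulting component ranking can be sketched with plain set operations (component and metric names are illustrative):

```python
def novelty_score(metrics_c, metrics_f):
    """New metrics (in F but not C) plus discarded metrics (in C but not F)."""
    c, f = set(metrics_c), set(metrics_f)
    return len(f - c) + len(c - f)

def rank_components(correct, faulty):
    """Rank components by their novelty score, most novel first."""
    comps = set(correct) | set(faulty)
    scores = {comp: novelty_score(correct.get(comp, ()), faulty.get(comp, ()))
              for comp in comps}
    return sorted(comps, key=lambda comp: scores[comp], reverse=True)

# illustrative per-component metric sets in the correct (C) and faulty (F) runs
correct = {"nova": {"m1", "m2"}, "glance": {"m3"}}
faulty  = {"nova": {"m1", "m4", "m5"}, "glance": {"m3"}}
ranking = rank_components(correct, faulty)
```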
**#3: Cluster analysis: novelty & similarity.** Clusters aggregate component metrics which exhibit similar behavior over time. The clusters with new or discarded metrics should be more interesting for RCA compared to the unchanged clusters of that component (with some exceptions, explained below). For a given component, we compute the novelty scores of its clusters as the sum of the number of new and discarded metrics, and produce a list of [component, metric list] pairs, where the metric list considers metrics from the clusters with higher novelty scores.
In addition, we track the similarity of a component’s clusters between the C and F versions. This is done to identify two events: (1) appearance (or disappearance) of edges between versions; and (2) attribute changes in relationships maintained between the C and F versions (e.g., a change in Granger causality time lag). An edge between clusters x and y (belonging to components A and B, respectively) is said to be ‘maintained between versions’ if their respective metric compositions do not change significantly between the C and F versions, i.e., if \( S(M^A_{x,C}, M^A_{x,F}) \) and \( S(M^B_{y,C}, M^B_{y,F}) \) are both high, where \( M^A_{x,C} \) and \( M^A_{x,F} \) are the metric compositions of cluster x of component A in the C and F versions, respectively, and \( S \) is a measure of cluster similarity (defined below). Both events – (1) and (2) – can be an indication of an anomaly, because one would expect edges between clusters with high similarity to be maintained between versions.
We compute the **cluster similarity score**, \( S \), according to a modified form of the Jaccard similarity coefficient
\[
S = \frac{|M^A_{i,C} \cap M^B_{i,F}|}{|M^A_{i,C}|} \quad (2)
\]
To eliminate the penalty imposed by new metrics added to the faulty cluster, we only consider the contents of the correct cluster in the denominator (instead of the union of \( M^A_{i,C} \) and \( M^B_{i,F} \)).
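Equation (2) can be sketched directly with Python sets (representing metric compositions as sets is an illustrative choice on our part):

```python
def cluster_similarity(metrics_correct, metrics_faulty):
    """Modified Jaccard coefficient of Eq. (2).

    Only the correct cluster's size appears in the denominator, so new
    metrics added to the faulty cluster carry no penalty.
    """
    if not metrics_correct:
        return 0.0
    return len(metrics_correct & metrics_faulty) / len(metrics_correct)
```

Note the asymmetry: adding a metric to the faulty cluster leaves the score unchanged, while dropping one from the correct cluster's composition lowers it.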
**#4: Edge filtering.** To further reduce the list of [component, metric list] pairs, we examine the relationships between components and clusters identified in steps 2 and 3. We identify three events:
1. Edges involving (at least) one cluster with a high novelty score
2. Appearance or disappearance of edges between clusters with high similarity
3. Changes in time lag in edges between clusters with high similarity
Event 1 isolates metrics related to edges which include at least one ‘novel’ cluster. Events 2 and 3 isolate clusters which are maintained between C and F versions, but become interesting for RCA due to a change in their relationship. Novelty and similarity scores are computed as in step 3. We define thresholds for ‘high’ novelty and similarity scores.
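A sketch of the filtering for events 1 and 2 (event 3, time-lag changes, is omitted for brevity; the edge and score layouts below are hypothetical):

```python
def filter_edges(edges_c, edges_f, novelty, similarity, nov_thresh, sim_thresh):
    """Step #4, events 1 and 2 (sketch).

    Edges are frozensets of two cluster ids; `novelty` and `similarity`
    map cluster id -> score (hypothetical layouts for illustration).
    """
    interesting = set()
    for e in edges_c | edges_f:
        a, b = tuple(e)
        # Event 1: at least one endpoint is a 'novel' cluster.
        if novelty.get(a, 0) >= nov_thresh or novelty.get(b, 0) >= nov_thresh:
            interesting.add(e)
        # Event 2: edge appeared or disappeared between high-similarity clusters.
        appeared_or_gone = (e in edges_c) != (e in edges_f)
        high_sim = (similarity.get(a, 0) >= sim_thresh
                    and similarity.get(b, 0) >= sim_thresh)
        if appeared_or_gone and high_sim:
            interesting.add(e)
    return interesting
```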
**#5: Final rankings.** We present a final list of [component, metric list] pairs. The list is ordered by component, following the rank given in step 2. The **metric list** items include the metrics identified at steps 3 and 4.
5 Implementation
We next describe the implementation details of SIEVE. Our system implementation, including the software versions used, is published at https://sieve-microservices.github.io. For load generation, SIEVE requires an application-specific load generator. We experimented with two microservices-based applications: ShareLaTeX [22] and OpenStack [18, 26]. For ShareLaTeX, we developed our own load generator using Locust [10], a Python-based distributed load generation tool that simulates virtual users in the application (1,041 LoC). For OpenStack, we used Rally [20], the official benchmark suite from OpenStack.
For metric collection, SIEVE uses Telegraf [24] to collect application/system metrics and stores them in InfluxDB [7]. Telegraf seamlessly integrates with InfluxDB, supports metrics of commonly-used components (e.g., Docker, RabbitMQ, memcached) and can run custom scripts for collection of additional metrics exposed by application APIs (e.g., [19]). With this setup, SIEVE can store any time-series metrics exposed by microservice components.
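Such a pipeline is typically expressed in Telegraf's TOML configuration; a minimal sketch follows, where the endpoints, database name, and script path are hypothetical placeholders (the key names follow Telegraf's standard plugin configuration format):

```toml
# Collection interval for all inputs
[agent]
  interval = "10s"

# Ship every collected metric to InfluxDB (hypothetical endpoint/database)
[[outputs.influxdb]]
  urls = ["http://influxdb.example:8086"]
  database = "sieve_metrics"

# Built-in input plugins for commonly-used components
[[inputs.docker]]
  endpoint = "unix:///var/run/docker.sock"

[[inputs.rabbitmq]]
  url = "http://rabbitmq.example:15672"

[[inputs.memcached]]
  servers = ["localhost:11211"]

# Custom script exposing additional application metrics
[[inputs.exec]]
  commands = ["/opt/sieve/app_metrics.sh"]
  data_format = "influx"
```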
For the call graph extraction, SIEVE leverages sysdig call tracer [23] to obtain which microservice components communicate with each other. We wrote custom scripts to record network system calls with source and destination IP addresses on every machine hosting the components (457 LoC). These IP addresses are then mapped to the components using the cluster manager’s service discovery mechanism.
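The mapping from recorded connections to call-graph edges can be sketched as follows (the record layout is assumed, and the service-discovery lookup is stubbed as a plain dict):

```python
def build_call_graph(connections, ip_to_component):
    """Turn (src_ip, dst_ip) pairs from the recorded network system calls
    into directed component-to-component edges, dropping self-edges and
    connections whose endpoints are not known to service discovery."""
    edges = set()
    for src_ip, dst_ip in connections:
        src = ip_to_component.get(src_ip)
        dst = ip_to_component.get(dst_ip)
        if src and dst and src != dst:
            edges.add((src, dst))
    return edges
```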
We implemented SIEVE’s data analytics techniques in Python (2243 LoC) including metric filtering, clustering based on k-Shape, and Granger Causality. The analysis can also be distributed across multiple machines for scalability.
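For illustration, a standard bivariate Granger F-test can be written with ordinary least squares in a few lines. This is a textbook sketch, not SIEVE's actual implementation, and the function names are ours:

```python
import numpy as np

def _lagged_design(series_list, lag):
    """Design matrix with an intercept and lags 1..lag of each series."""
    n = len(series_list[0])
    cols = [np.ones(n - lag)]
    for s in series_list:
        for k in range(1, lag + 1):
            cols.append(s[lag - k: n - k])  # s[t-k] for t = lag..n-1
    return np.column_stack(cols)

def _rss(X, target):
    """Residual sum of squares of an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ beta
    return float(resid @ resid)

def granger_f(x, y, lag):
    """F-statistic for the hypothesis 'x Granger-causes y' at `lag`.

    Compares a restricted model (y explained by its own lags) against an
    unrestricted one (y's lags plus x's lags).
    """
    target = y[lag:]
    rss_r = _rss(_lagged_design([y], lag), target)
    rss_u = _rss(_lagged_design([y, x], lag), target)
    n = len(target)
    return ((rss_r - rss_u) / lag) / (rss_u / (n - 2 * lag - 1))
```

A large F-statistic in one direction and a small one in the other suggests a directed causal edge, which is the kind of relationship SIEVE records in its dependency graph.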
Lastly, we also implemented two case studies based on the SIEVE infrastructure: autoscaling in ShareLaTeX (720 LoC) and RCA in OpenStack (507 LoC). For our autoscaling engine, we employed Kapacitor [9] to stream metrics from InfluxDB in real-time and to install our scaling rules using its user-defined functions. For the RCA engine, we implemented two modules in Python: one module extracts metric clustering data (125 LoC) and the other module (382 LoC) compares clustering data and dependency graphs.
6 Evaluation
Our evaluation answers the following questions:
1. How effective is the general SIEVE framework? (§6.1)
2. How effective is SIEVE for autoscaling? (§6.2)
3. How effective is SIEVE for root cause analysis? (§6.3)
6.1 SIEVE Evaluation
Before we evaluate SIEVE with the case studies, we evaluate SIEVE’s general properties: (a) the robustness of clustering; (b) the effectiveness of metric reduction; and (c) the monitoring overhead incurred by SIEVE’s infrastructure.
**Experimental setup.** We ran our measurements on a 10-node cluster, each node with a 4-core Xeon E5405 processor, 8 GB DDR2 RAM, and a 500 GB HDD. For the general experiments, we loaded ShareLaTeX using SIEVE five times with random workloads. The random workloads also help to validate whether the model stays consistent when no assumption about the workload is made.
...
We next evaluate the effectiveness of *Sieve* for the orchestration of autoscaling in microservices.
**Experimental setup.** For the autoscaling case study, we used ShareLaTeX [22] (as described in §4.1). We used 12 t2.large VM-Instances on Amazon EC2 with 2 vCPUs, 8GB RAM and 20 GB Amazon EBS storage. This number of instances was sufficient to stress-test all components of the application. The VM instances were allocated statically during the experiments, and the application components were deployed on them as Docker containers. We created a Docker image for each ShareLaTeX component and used Rancher [21] as the cluster manager to deploy our containers across different hosts.
**Dataset.** We used an hour-long sample of the HTTP trace from the 1998 soccer World Cup [6]. Note that the access pattern and requested resources in the World Cup trace differ from those of the ShareLaTeX application. However, we used the trace to map traffic patterns for our application to generate a realistic spike workload. In particular, sessions in the HTTP trace were identified by the client IP. Afterwards, we enqueued the sessions based on their timestamps; a virtual user was spawned for the duration of each session and then stopped.
**Results.** We chose an SLA condition requiring the 90th percentile of all request latencies to be below 1000 ms. Traditional tools, such as Amazon AWS Auto Scaling [1], often use CPU usage as the default metric to trigger autoscaling. *Sieve* identified an application metric named `http-requests_Project_id_GET_mean` (Figure 6) as a better metric for autoscaling than CPU usage.
To calculate the threshold values to trigger autoscaling, we used a 5-minute sample from the peak load of our HTTP trace and iteratively refined the values to stay within the SLA condition. As a result, we found that the trigger thresholds for scaling up and down while using the CPU usage metric should be 21% and 1%, respectively. Similarly, for `http-requests_Project_id_GET_mean`, the thresholds for scaling up and down should be 1400ms and 1120ms, respectively.
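The resulting trigger amounts to a simple threshold rule. A sketch using the thresholds derived above (the function is an illustrative stand-in, not *Sieve*'s actual Kapacitor rule):

```python
def scaling_action(latency_ms, up_threshold=1400.0, down_threshold=1120.0):
    """Threshold rule for http-requests_Project_id_GET_mean (values in ms).

    Defaults are the thresholds derived iteratively from the 5-minute
    peak-load sample; this function is a sketch for illustration.
    """
    if latency_ms > up_threshold:
        return "scale_up"
    if latency_ms < down_threshold:
        return "scale_down"
    return "hold"
```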
After installing the scaling actions, we ran our one-hour trace. Table 4 shows the comparison between using CPU usage and `http-requests_Project_id_GET_mean` for the scaling triggers. When *Sieve*’s metric selection was used for autoscaling triggers, the average CPU usage of each component increased. There were also fewer SLA violations and fewer scaling actions.
### 6.3 Case-study #2: Root Cause Analysis
To evaluate the applicability of SIEVE to root cause analysis, we reproduce a representative OpenStack anomaly, Launchpad bug #1533942 [27]. We selected this issue because it has well-documented root causes, providing an appropriate ground truth, and allowing for the identification of ‘correct’ and ‘faulty’ code versions. We compare the documented root causes to the lists of root causes produced by our RCA engine. A similar failure is used as a representative case in prior work [52, 80]. Due to space constraints, we refer the analysis of other representative bugs to an extended version of the article available at [84].
#### Bug description: Failure to launch a VM
The bug manifests itself as follows: when launching a new VM instance using the command line interface, one gets the error message ‘No valid host was found. There are not enough hosts available.’ despite the availability of compute nodes. Without any other directly observable output, the instance falls into the ‘ERROR’ state and fails.
#### Root cause
The failure is caused by the crash of an agent in the Neutron component, namely the Open vSwitch agent. The Open vSwitch agent is responsible for setting up and managing virtual networking for VM instances. The ultimate cause is traced to a configuration error in OpenStack Kolla’s deployment scripts [27].
#### Experimental setup
We deployed OpenStack components as containerized microservices using Kolla [26]. We configured Kolla to deploy 7 main OpenStack components (e.g., Nova, Neutron, Keystone, Glance, Ceilometer) along with several auxiliary components (e.g., RabbitMQ, memcached) for a total of 47 microservices. We used OpenStack’s telemetry component (Ceilometer) to expose relevant OpenStack-related metrics and extracted them via Telegraf. The infrastructure consists of two m4.xlarge Amazon EC2 VM instances to run OpenStack components (16 vCPUs, 64 GB RAM and 20 GB Amazon EBS storage) and three t2.medium VM instances (2 vCPUs, 4GB RAM and 20 GB EBS storage) for the supporting components (measurement, database and deployment).
#### Results
We expect the RCA engine’s outcome to include the Neutron component, along with metrics related to VM launches and networking. The {component, metrics list} pairs with Neutron should be ranked higher than others.
To generate load on OpenStack, we run the ‘boot_and_delete’ task 100 times in the Rally benchmark [20], which launches 5 VMs concurrently and deletes them after 15-25 seconds. We apply this process to the correct (C) and faulty (F) versions.
#### Table 4. Comparison between a traditional metric (CPU usage) and SIEVE’s selection when used as autoscaling triggers.
<table>
<thead>
<tr>
<th>Metric</th>
<th>CPU usage</th>
<th>SIEVE</th>
<th>Difference [%]</th>
</tr>
</thead>
<tbody>
<tr>
<td>Mean CPU usage per component</td>
<td>5.98</td>
<td>9.26</td>
<td>+54.82</td>
</tr>
<tr>
<td>SLA violations (out of 1400 samples)</td>
<td>188</td>
<td>70</td>
<td>-62.77</td>
</tr>
<tr>
<td>Number of scaling actions</td>
<td>32</td>
<td>21</td>
<td>-34.38</td>
</tr>
</tbody>
</table>
#### Figure 8. Final edge differences for RCA evaluation between top 5 components of Table 5 with similarity threshold of 0.50.
#### Table 5. OpenStack components, sorted by the number of novel metrics between correct (C) and faulty (F) versions.
<table>
<thead>
<tr>
<th>Component</th>
<th>Changed (New/Discarded)</th>
<th>Total (per component)</th>
<th>Final ranking</th>
</tr>
</thead>
<tbody>
<tr>
<td>Nova API</td>
<td>29 (7/22)</td>
<td>59</td>
<td>1</td>
</tr>
<tr>
<td>Nova libvirt</td>
<td>21 (0/21)</td>
<td>39</td>
<td>2</td>
</tr>
<tr>
<td>Nova scheduler</td>
<td>14 (7/7)</td>
<td>30</td>
<td>-</td>
</tr>
<tr>
<td>Neutron server</td>
<td>12 (2/10)</td>
<td>42</td>
<td>3</td>
</tr>
<tr>
<td>RabbitMQ</td>
<td>11 (5/6)</td>
<td>57</td>
<td>4</td>
</tr>
<tr>
<td>Neutron L3 agent</td>
<td>7 (0/7)</td>
<td>39</td>
<td>5</td>
</tr>
<tr>
<td>Nova novncproxy</td>
<td>7 (0/7)</td>
<td>12</td>
<td>-</td>
</tr>
<tr>
<td>Glance API</td>
<td>5 (0/5)</td>
<td>27</td>
<td>6</td>
</tr>
<tr>
<td>Neutron DHCP ag.</td>
<td>4 (0/4)</td>
<td>35</td>
<td>7</td>
</tr>
<tr>
<td>Nova compute</td>
<td>3 (0/3)</td>
<td>41</td>
<td>8</td>
</tr>
<tr>
<td>Glance registry</td>
<td>3 (0/3)</td>
<td>23</td>
<td>9</td>
</tr>
<tr>
<td>Haproxy</td>
<td>2 (1/1)</td>
<td>14</td>
<td>10</td>
</tr>
<tr>
<td>Nova conductor</td>
<td>2 (0/2)</td>
<td>29</td>
<td>-</td>
</tr>
<tr>
<td>Other 3 components</td>
<td>0 (0/0)</td>
<td>59</td>
<td>-</td>
</tr>
<tr>
<td><strong>Totals</strong></td>
<td>113 (22/91)</td>
<td>508</td>
<td>-</td>
</tr>
</tbody>
</table>
#### Figure 7. (a) Cluster novelty score. (b) Edge novelty score. (c) No. of components & clusters after edge filtering w/ varying thresholds.
**Step #3: Cluster novelty & similarity.** Computing the cluster novelty scores shows that the novel metrics from step 1 are distributed over only 27 of the 67 clusters (Figure 7(a)), even when conservatively considering a cluster to be novel if it contains at least one new or discarded metric. Considering only novel clusters reduces the number of metrics and the number of edges for the developers to analyze for the root cause in step 4. We also compute the similarity scores for these novel clusters and use the similarity in the next step.
**Step #4: Edge filtering.** By investigating the novel edges (i.e., new or deleted) in the dependency graph, the developers can better focus on understanding which component might be more relevant to the root cause. Utilizing different cluster similarity scores enables developers to filter out some of the edges that may not be relevant. Figures 7(b & c) show the effect of different cluster similarity thresholds for all components in Table 5 when filtering edges. Without any similarity thresholds, there are 42 edges of interest, corresponding to a set of 13 components, 29 clusters and 221 metrics that might be relevant to the root cause (Figure 7(c)). A higher threshold reduces the number of the \{component, metrics list\} pairs: filtering out clusters with inter-version similarity scores below 0.50, there are 24 edges of interest, corresponding to 10 components, 16 clusters and 163 metrics.
Figure 8 shows the edges between the components at the top-5 rows of Table 5, with a similarity threshold of 0.50. Note that one component (i.e., Nova scheduler) was removed by the similarity filter. Also, one of the new edges includes a Nova API component cluster, in which the nova-instances-in-state-ACTIVE metric is replaced by nova-instances-in-state-ERROR. This change relates directly to the observed anomaly (i.e., error in VM launch). The other end of this edge is a cluster in the Neutron component, which aggregates metrics related to VM networking, including a metric named neutron-ports-in-status-DOWN. This observation indicates a causal relationship between the VM failure and a VM networking issue, which is the true root cause of the anomaly.
We also note that a high similarity threshold may filter out useful information. For example, the Neutron component cluster with the neutron-ports-in-status-DOWN metric is removed with similarity thresholds above 0.60. We leave the study of this parameter’s sensitivity to future work.
**Step #5: Final rankings.** The rightmost column of Table 5 shows the final rankings, considering the edge-filtering step with a 0.50 similarity threshold. Figure 8 shows a significant reduction in the amount of state to analyze (from a total of 16 components and 508 metrics to 10 and 163, respectively) because of the exclusion of non-novel clusters. For example, for Nova API, the number of metrics reduces from 59 to 20, and for Neutron server from 42 to 22. Furthermore, our method includes the Neutron component among the top 5 components, and isolates an edge which is directly related to the true root cause of the anomaly.
7 Related Work
Scalable Monitoring. With the increasing number of metrics exposed by distributed cloud systems, the scalability of the monitoring process becomes crucial. Meng et al. [70] optimize monitoring scalability by choosing appropriate monitoring window lengths and adjusting the monitoring intensity at runtime. Canali et al. [39] achieve scalability by clustering metric data. A fuzzy logic approach is used to speed up clustering, and thus obtain data for decision making within shorter periods. Rodrigues et al. [43] explore the trade-off between timeliness and scalability in cloud monitoring, and analyze the mutual influence between these two aspects based on the monitoring parameters. Our work is complementary to existing monitoring systems, since SIEVE aims to improve monitoring efficiency by reducing the number of monitored metrics.
Distributed Debugging. Systems like Dapper [81] and Pip [77] require the developers to instrument the application to obtain its causal model. X-trace [47] uses a modified network stack to propagate useful information about the application. In contrast, SIEVE does not modify the application code to obtain the call/dependency graph of the application.
Systems such as Fay [46] and DTrace [40] enable developers to dynamically inject debugging requests and require no initial logs of metrics. Pivot Tracing [69] combines dynamic instrumentation with causality tracing. SIEVE can complement these approaches, because it can provide information about interesting components and metrics, so that developers can focus their efforts on understanding them better. Furthermore, SIEVE’s dependency graph is a general tool that can not only be used for debugging, but also for other purposes such as orchestration [86–88].
Data provenance [34, 50, 83] is another technique that can be used to trace the dataflow in the system. SIEVE can also leverage the existing provenance tools to derive the dependence graph.
Metric reduction. Reducing the size and dimensionality of the bulk of metric data exposed by complex distributed systems is essential for understanding it. Common techniques include sampling and data clustering via k-means and k-medoids. Kollios et al. [64] employ biased sampling to capture the local density of datasets. Sampling-based approaches argue for approximate computing [65, 74, 75] to enable a systematic trade-off between the accuracy and the efficiency of collecting and computing on the metrics. Zhou et al. [91] simply use random sampling due to its simplicity and low complexity. Ng et al. [71] improve the k-medoids method, making it more effective and efficient. Ding et al. [44] rely on clustering over sampled data to reduce clustering time.
SIEVE’s approach is unique because of its two-step process: (1) we first cluster time series to identify the internal dependencies among metrics, and then (2) infer the causal relations among the resulting time series. Essentially, SIEVE applies two successive steps of data reduction to achieve a better overall reduction. Furthermore, SIEVE’s time series processing method extracts other useful information, such as the time delay of the causal relationship, which can be leveraged in different use cases (e.g., root cause analysis).
Orchestration of autoscaling. Current techniques for autoscaling can be broadly classified into four categories [68]: (i) static and threshold-based rules (offered by most cloud computing providers [3, 4, 16, 25]); (ii) queuing theory [31, 57, 90]; (iii) reinforcement learning [76, 82, 89]; and (iv) time series analysis [41, 62, 79]. Existing systems using these techniques can benefit from the selection of better metrics and/or from the dependencies between components. In this regard, our work is complementary to these techniques: it is intended to provide the developers with knowledge about the application as a whole. In our case study, we showed the benefits of SIEVE for an autoscaling engine using threshold-based rules.
Root Cause Analysis (RCA). Large and complex distributed systems are susceptible to anomalies, whose root causes are often hard
to diagnose [59]. Jiang et al. [61] compare "healthy" and "faulty" metric correlation maps, searching for broken correlations. In contrast, Sieve leverages Granger causality instead of simple correlation, allowing for richer causality inference (e.g., causality direction, time lag between metrics). MonitorRank [63] uses metric collection for RCA in a service-oriented architecture. It only analyzes pre-established (component, metric) relations according to a previously-generated call graph. Sieve also uses a call graph, but does not fix metric relations between components, allowing for a richer set of potential root causes. There are other application-specific solutions for RCA (e.g., Hansel [80], Gretel [52]). In contrast, Sieve uses a general approach for understanding the complexity of microservices-based applications that can support RCA as well as other use cases.
8 Experience and Lessons Learned
While developing Sieve, we set ourselves ambitious design goals (described in §2.2). We learned the following lessons while designing and deploying Sieve for real-world applications.
Lesson#1. When we first designed Sieve, we envisioned a dependency graph that clearly shows the relationships between components (e.g., a tree). As a result, not only would the number of metrics that need to be monitored be reduced, but also the number of components: one would only need to observe the root(s) of the dependency graph and derive the actions of the dependent components from the established relationships. Such a dependency graph would greatly benefit the orchestration scenario. Unfortunately, our experience has shown that the relationships between components are usually not linear, making the dependency graph more complex. Also, there was no obvious root. Consequently, we had to adjust our thinking and utilize some application knowledge regarding components and their relations with others. Nevertheless, in our experience, Sieve provides the developer with a good starting point to improve their workflows.
Lesson#2. Sieve is designed for "blackbox" monitoring of the evaluated application, where Sieve can collect and analyze generic system metrics that are exposed by the infrastructure (e.g., CPU usage, disk I/O, network bandwidth). However, in our experience, a system for monitoring and analyzing an application should also consider application-specific metrics (e.g., request latency, number of error messages) to build effective management tools. Fortunately, many microservices applications we analyzed already export such metrics. However, given the number of components and exported metrics, this fact can easily create an "information overload" for the application developers. In fact, the main motivation of Sieve was to deal with this "information overload". Our experience showed that Sieve can still monitor the application in the blackbox mode (i.e., no instrumentation to the application), but also overcome the barrage of application-specific metrics.
Lesson#3. To adapt to variations in the application workload, Sieve needs to build a robust model of the evaluated application. This requires a workload generator that can stress-test the application thoroughly. To meet this requirement, there are three approaches: (1) In many cases the developers already supply an application-specific workload generator; for instance, we employed the workload generator shipped with the OpenStack distribution. (2) For cases where we did not have an existing workload generator, we implemented a custom workload generator for the evaluated application. For example, we built a workload generator for ShareLaTeX. Although we were able to faithfully simulate user actions in ShareLaTeX, such an approach might not be feasible for some applications. Having the ability to utilize existing production traces (e.g., by replaying the trace or by reproducing similar traces) or working in an online fashion to generate the model of the application would certainly help Sieve. Custom workload generation can then be used to close the gaps in the model for certain workload conditions not covered by the existing traces. (3) We could also explore principled approaches for automatic workload generation, such as symbolic execution in distributed systems [33].
9 Conclusion and Future Work
This paper reports on our experiences with designing and building Sieve, a platform to automatically derive actionable insights from monitored metrics in distributed systems. Sieve achieves this goal by automatically reducing the amount of metrics and inferring inter-component dependencies. Our general approach is independent of the application, and can be deployed in an unsupervised mode without prior knowledge of the time series of metrics. We showed that Sieve’s resulting model is consistent, and can be applied for common use cases such as autoscaling and root-cause debugging.
An interesting research challenge for the future would be to integrate Sieve into the continuous integration pipeline of an application development. In this scenario, the dependency graph can be updated incrementally [36–38], which would speed up the analytics part. In this way, the developers would be able to get real-time profile updates of their infrastructure. Another challenge is to utilize already existing traffic to generate the dependency graph without requiring the developers to load the system. Using existing traffic would alleviate the burden of developers to supply a workload generator. On the other hand, existing traffic traces might not always capture the stress points of the application. A hybrid approach, in which workload generation is only used for these corner cases, might help to overcome this problem.
Additional results and software availability. A detailed technical report with additional experimental evaluation results is available online [84]. Finally, the source code of Sieve is publicly available: https://sieve-microservices.github.io/.
Acknowledgments. We would like to thank Amazon AWS for providing the required infrastructure to run the experiments.
References
C. Canali and R. Lancellotti. An Adaptive Technique To Model Virtual Ma-
P. Bhatotia.
Amemiya, Takeshi.
Proceedings of the 5th USENIX Symposium on Networked Systems
Server Provisioning and Load Dispatching for Connection-intensive Internet
Proceedings of the 2004 USENIX Annual Technical
Symposium on Computers and Communications (ISCC), 2011.
J. Paparrizos and L. Gravano. k-Shape: Efficient and Accurate Clustering of Time Series. In Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data, 2015.
J. Mace, R. Roelke, and R. Fonseca. Pivot Tracing: Dynamic Causal Monitoring for Distributed Systems. In Proceedings of the 25th Symposium on Operating Systems Principles (SOSP), 2015.
R. Lomax and D. Hahs-Vaughn.
K. Pearson. LIII. On Lines and Planes of Closest Fit to Systems of Points in Space. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 1901.
Implementing Infopipes: The SIP/XIP Experiment
Calton Pu¹, Galen Swint¹, Charles Consel², Younggyun Koh¹, Ling Liu¹, Koichi Moriyama³, Jonathan Walpole⁴, Wenchang Yan¹
This technical report is available at PDXScholar: https://pdxscholar.library.pdx.edu/compsci_fac/40
Abstract
We describe an implementation of the Infopipe abstraction for information flow applications. We have implemented software tools that translate the SIP/XIP variant of Infopipe specifications into executable code. These tools are evaluated through the rewriting of two realistic applications using Infopipes: a multimedia streaming program and a web source combination application. Measurements show that Infopipe-generated code has the same execution overhead as the manually written original version. The source code of the Infopipe version is reduced by 36% to 85% compared to the original.
1 Introduction
One of the fundamental functions of operating systems (OS) is to provide a higher level of programming abstraction on top of hardware to application programmers. More generally, an important aspect of OS research is to create and provide increasingly higher levels of programming abstraction on top of existing abstractions. Remote Procedure Call (RPC) [1] is a successful example of such abstraction creation on top of messages, particularly for programmers of distributed client/server applications.
We have proposed the Infopipe concept [16, 10, 9, 2] as a high level abstraction to support information-flow applications. Unlike RPC, which has clearly defined procedural semantics, Infopipe can have several flavors, depending on the kind of application for which it is being specialized. Examples include data streaming and filtering [10] and multimedia streaming [2]. In this paper, we describe the SIP/XIP variant of Infopipe currently under development at Georgia Tech, the software tools that implement SIP/XIP, and experiments that evaluate the concept as well as software tools.
The main contributions of this paper are the implementation of software tools for SIP/XIP Infopipes and an experimental evaluation. From a top-down perspective, our software tools consist primarily of a series of translators that successively create appropriate abstract machine code from the previous, higher-level abstraction. Our experiments show that the execution overhead of SIP/XIP-generated code is minimal compared to a hand-written version (on the order of a few percent), but the gains in code simplicity are substantial (code size reductions between 36% and 85% in representative applications).
The rest of the paper is organized as follows. Section 2 summarizes the Infopipe abstraction. Section 3 outlines the implementation strategy. Section 4 describes the experimental evaluation results. Section 5 summarizes related work and Section 6 concludes the paper.
2 The Infopipe Abstraction
2.1 Background and Motivation
Remote procedure call (RPC) is a well-established mechanism for constructing distributed systems and applications, and a considerable amount of
¹ Center for Experimental Research in Computer Systems (CERCS), Georgia Institute of Technology, Atlanta, Georgia. {firstname.lastname}@cc.gatech.edu
² INRIA/LaBRI/ENSEIRB, Bordeaux, France. {consel@labri.fr}
³ Sony Corporation, Tokyo, Japan. This author’s work was done during an extended visit to Georgia Tech.
⁴ OGI School of Science & Engineering, OHSU, Portland, Oregon. {walpole@cse.ogi.edu}
distributed systems research has centered on it. RPC is based on the procedure call abstraction, which raises the level of abstraction for distributed systems programming beyond raw message passing and naturally supports a request-response style of interaction that is common in many applications. The widespread use and acceptance of RPC has led to the development of higher-level architectural models for distributed system construction. For example, it is a cornerstone for models such as client/server, DCOM, and CORBA. The client/server model is widely considered to be a good choice for building practical distributed applications, particularly those using computation or backend database services.
On the other hand, while these models have proven successful in the construction of many distributed systems, RPC and message passing libraries offer limited support for information-driven applications. One example is bulk data transfers [6]. Another example arises when information flows are subject to real-world timing constraints: certain elements of distribution transparency, an often-cited advantage of RPC, can cause more problems than they solve. For example, restrictions on the available bandwidth or latency over a network link between two components of a media-streaming application are a serious concern and should not be hidden by the programming abstraction. Similarly, the reliability and security-related characteristics of a connection may be significant to applications that are streaming critical or sensitive information.
Several important emerging classes of distributed applications are inherently information-driven. Instead of occasionally dispatching remote computations or using remote services, such information-driven systems tend to transfer and process streams of information continuously (e.g., Continuous Queries [11, 12]). Members of this class range from applications that primarily transfer information over the wire, such as digital libraries, teleconferencing and video on demand, to applications that require information-intensive processing and manipulation, such as distributed multimedia, Web search and cache engines. Other applications such as electronic commerce combine heavy-duty information processing (e.g., during the discovery and shopping phase, querying a large amount of data from a variety of data sources [18]) with occasional remote computation (e.g., buying and updating credit card accounts as well as inventory databases).
We argue that an appropriate programming paradigm for information-driven applications should embrace information flow as a core abstraction and offer the following advantages over RPC. First, data parallelism among flows should be naturally supported. Second, the specification and preservation of QoS properties should be included. And third, the implementation should scale with the increasing size, complexity and heterogeneity of information-driven applications. We emphasize that such a new abstraction offers an alternative that complements RPC rather than replacing it. In client/server applications, RPC is clearly the natural solution.
2.2 The Infopipe Abstraction
We have proposed the Infopipe concept [16, 10, 9, 2] as an abstraction for capturing and reasoning about information flow in information-driven applications. Intuitively, an Infopipe is the information dual of an RPC. Like RPCs, Infopipes raise the level of abstraction for distributed systems programming and offer certain kinds of distribution transparency. Beyond RPC, an Infopipe is specified by its syntax, semantics, and quality of service (QoS) properties. Examples of QoS properties include the quality, consistency, reliability, security, and timeliness of the information flowing through Infopipes. In this paper, we include only enough description of Infopipes to make the paper self-contained. Many important Infopipe features, such as QoS properties and the restructuring of Infopipes (topics of active research), are beyond the scope of this paper.
A simple Infopipe has two ends – a consumer (input) end and a producer (output) end – and implements a unidirectional information flow from a single producer to a single consumer. The processing, buffering, and filtering of information happen in the middle of the Infopipe, between the two ends. As mentioned before, an Infopipe links information producers to consumers. The information producer exports an explicitly defined information flow, which goes to the input end of the Infopipe. After appropriate transportation, storage, and processing, the information flows through the output end to the information consumer.
Infopipe is a language- and system-independent mechanism to process information in a distributed system. This is deliberate: one of the main reasons for RPC’s success among practical imperative programming languages is their universal adoption of the procedure call abstraction. As a consequence, stub generators are able to hide the tedious details of marshalling and unmarshalling parameters for all practical languages. There are two additional sources of problems in the implementation of stub generators: (1) the heterogeneity of operating systems and hardware, and (2) the translation between the language-level procedure call abstraction and the underlying system-level message-based implementation. The eventual definition of an Interface Description Language (IDL) solved both problems by encapsulating the translation functions in a portable IDL compiler.
Our approach to making Infopipes language and system independent parallels that used in RPC. We define a generic interface for Infopipe manipulation, and use the equivalent of IDL and stub generators to hide the technical difficulties of marshalling and unmarshalling data and manipulating system-specific mechanisms for QoS property enforcement. By adopting this approach we shield the application developer from the complexity of heterogeneous operating systems and hardware and the translation from language-level abstractions to underlying message-based implementations.
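To make the idea of a generic Infopipe manipulation interface concrete, the following C sketch shows one possible shape for it. Every name here (infopipe_t, ip_push, upcase) is a hypothetical illustration for this paper's discussion, not the actual SIP/XIP-generated API.

```c
/* A minimal sketch of a generic Infopipe interface.  Names are
 * hypothetical illustrations, not the actual generated code. */
#include <ctype.h>
#include <stddef.h>
#include <string.h>

typedef struct infopipe infopipe_t;

/* An elementary Infopipe: a consumer (input) end, a producer (output)
 * end, and a transform applied to each information unit in between. */
struct infopipe {
    int (*transform)(infopipe_t *self, char *unit, size_t len);
    infopipe_t *next;   /* downstream pipe; NULL at the final consumer */
};

/* Push one information unit through this pipe and all downstream pipes. */
static int ip_push(infopipe_t *p, char *unit, size_t len)
{
    for (; p != NULL; p = p->next)
        if (p->transform(p, unit, len) != 0)
            return -1;
    return 0;
}

/* Example filter stage: upper-case the unit's payload in place. */
static int upcase(infopipe_t *self, char *unit, size_t len)
{
    (void)self;
    for (size_t i = 0; i < len; i++)
        unit[i] = (char)toupper((unsigned char)unit[i]);
    return 0;
}
```

A stub generator for Infopipes would emit the marshalling and transport code behind `transform`, so the application programmer only composes pipes and filters.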
2.3 Infopipe Specification Language
The specification of an Infopipe is divided into three components: syntax, semantics, and QoS properties. The software that handles the first two components corresponds directly to RPC stub generators: an Infopipe Specification Language compiler can generate the plumbing code, so Infopipe programmers do not have to write code to manipulate the explicit representation and description of an Infopipe.
Between its consumer and producer ends, an Infopipe is a one-way mapping that transforms information units from its input domain to the output range. It is probably not surprising that there are many concrete examples of existing information flow software. A familiar example is Unix filter programs; combining filters yields a Unix pipeline, which is a precursor of the Infopipe programming style. Another concrete example of an information flow manipulation language is SQL in relational databases.
In this paper, we use the SIP (for Specifying InfoPipes) variant of Infopipe Specification Languages. SIP is a domain-specific language being developed at Georgia Institute of Technology to support information flow applications. SIP is a generic Infopipe specification language that supports a number of communications abstract machines, including the ECho publish/subscribe messaging middleware and the common TCP socket/RPC invocations. Since our focus is on the implementation and evaluation, we omit the language definition and include examples in the Appendix as illustration. From the system point of view, SIP is similar to other domain-specific languages such as Devil [14] for writing device drivers. SIP encapsulates domain knowledge (in this case, distributed information flow applications and communications mechanisms) so the applications written in SIP can be more concise and portable.
Composition of Infopipes is an active area of research, and space constraints limit the number of experiments in this paper. In Section 4.4, we outline an experiment with a simple serial composition of Infopipes in an application that combines information from several web sources. This small experiment only hints at the interesting problems in the area of Infopipe composition.
3 Implementation Outline
3.1 Implementation Strategy
Our design of software tools to translate SIP into executable code consists of two steps. First, SIP is translated into an intermediate representation, called the XML Infopipe Specification Language (XIP). Then, XIP is translated into executable code for one of the communications abstract machines. There are three main reasons for this intermediate representation and these translation steps.
First, we are planning for several variants of Infopipe Specification Language, of which SIP is just one instance. This is an area of active research, particularly from the domain-specific language point of view. Each variant may also evolve over time, as new functionality is added. Instead of trying to create and maintain different software tools for each variant of Infopipe Specification Language, we decided to create a standard extensible intermediate representation based on XML (XIP). This way, the second step (the actual code generation) can be developed in parallel with the design and evolution of the variants of Infopipe Specification Languages.
Second, we are planning the generation of code for several communications abstract machines. The experiments described below use a publish/subscribe event messaging mechanism called ECho. A standard format such as XIP simplifies the addition of new abstract machines for the code generator. We also have implemented a prototype version that translates XIP into RPC and sockets, which have lower overhead for message exchanges.
Third, we will be attaching a variety of metadata to the data stream being carried by Infopipes. This metadata includes data provenance annotations (e.g., when and where the information was generated, and how it was processed) and other data processing instructions (e.g., filtering algorithms that understand the semantics of this particular data stream). Further discussion of the metadata issue is beyond the scope of this paper, but it is an important reason for the XIP standard format.
Currently, the first step of SIP translation (into XIP) is done by hand. This is primarily due to the fast evolution of SIP. The second step (from XIP into executable) is described in the following section.
### 3.2 Code Generation Process
We skip the details of XIP in this paper, since it is an intermediate representation invisible from the programmer’s point of view. Furthermore, XIP is used only during code generation and therefore contributes little to the run-time overhead, the other major concern of this paper. At the risk of oversimplification, XIP can be described as a union of all variants of Infopipe Specification Languages. By union we mean combined functionality from these variants, not syntax. We chose XML due to its extensibility, which is capable of handling all three aspects of an Infopipe (syntax, semantics, and QoS). Even though XML was originally designed as a data interchange format, not an intermediate representation, it has worked very well so far.
The translation of XIP into executable code is accomplished through a series of transformations on the XIP specification of the Infopipe. For convenience, we call these internal representations XIP+\(k\), where \(k\) is the stage number in the series. The input files for the XIP translator are the XIP specification of the Infopipe and the abstract machine description (executable code templates) file.
- The main transformation from XIP to XIP+1 is the explicit naming of all inputs and outputs, using the information in the XIP file and the abstract machine description.
- The transformation from XIP+1 to XIP+2 is the flattening of composite Infopipes into elementary Infopipes (with one input and one output) plus the syntactic data types, data filters, and aspect [7] (e.g., end-to-end latency management) templates.
- From XIP+2 to XIP+3, the aspects that do work are filled in, while the unnecessary aspects are removed.
- From XIP+3 to XIP+4, the aspects are woven together, and the templates are used to generate executable code from XIP+4 and the abstract machine description file.
In the current implementation, we generate code for two concrete communications abstract machines: (1) the ECho publish/subscribe messaging facility, and (2) the popular Unix sockets interface. Also, the translation process from XIP to XIP+4 is done in main memory for performance reasons. The transformation algorithms are designed so that each stage can write XIP+\(k\) to disk to accommodate arbitrarily large XIP descriptions.
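The staged, in-memory XIP translation can be sketched in C as a chain of stage functions, each producing the next XIP+\(k\) representation. The types and stage bodies below are hypothetical stand-ins for the real XML transformations.

```c
/* Sketch of the in-memory XIP -> XIP+4 driver.  xip_doc_t stands in
 * for the real XML tree; each stage rewrites the document in place. */
#include <string.h>

typedef struct { char buf[128]; } xip_doc_t;
typedef int (*xip_stage_fn)(xip_doc_t *doc);

/* Toy stages that just tag the document, mirroring the four steps:
 * port naming, flattening, aspect filling, and aspect weaving. */
static int name_ports(xip_doc_t *d)   { strcat(d->buf, ">XIP+1"); return 0; }
static int flatten(xip_doc_t *d)      { strcat(d->buf, ">XIP+2"); return 0; }
static int fill_aspects(xip_doc_t *d) { strcat(d->buf, ">XIP+3"); return 0; }
static int weave(xip_doc_t *d)        { strcat(d->buf, ">XIP+4"); return 0; }

/* Run the stages in order; a failing stage leaves the doc at XIP+k. */
static int xip_translate(xip_doc_t *doc, const xip_stage_fn st[], int n)
{
    for (int k = 0; k < n; k++)
        if (st[k](doc) != 0)
            return -1;
    return 0;
}
```

In the real tools, each stage could also spill its XIP+\(k\) output to disk, which is what allows arbitrarily large XIP descriptions.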
### 4 Experimental Evaluation
#### 4.1 The Statistical Treatment
Many system components are involved in the measurement of software systems such as ours, with variations being introduced by the hardware (e.g., cache misses), OS kernel variations (e.g., scheduling and memory management decisions), and network (e.g., very short temporary interferences with other nodes). This is particularly the situation with I/O operations such as Infopipes. Some operations (e.g., Infopipe initialization) cannot be repeated many times in a warmed cache to reduce variance, since their normal mode of operation is execution without a warmed cache.
Therefore, we took some care in our evaluation to clarify the interpretation of measured results. We use a simple statistical treatment, the two-sample t-test, in which the means of two sets of measurement results are compared. We assume two independent sets of random samples, each consisting of independent and identically distributed random variables. Our null hypothesis is that the means of the two samples are equal, i.e., the difference between the two sets of measurements is not statistically meaningful. To decide whether to accept or reject the null hypothesis, we compare the \( t\)-statistic (derived from the two samples) against the Student's t distribution and adopt a 95% confidence level in the test. For most of the experiments, the sample size was 100 (the same experiment was run 100 times).
### 4.2 Microbenchmarks
The first set of experiments consists of microbenchmarks to evaluate the overhead of Infopipe basic functions. The hardware used in the experiments is a pair of Dell dual-CPU workstations with Pentium III (800MHz, 512MB, 256KB L1 cache) running Linux 2.4.9-smp. The two machines are connected through a lightly loaded 100Mb Ethernet and sharing the same file system.
The first microbenchmark measures the overhead of transmitting one single integer, repeated 100,000 times. (As mentioned above, each test is repeated 100 times and the means of the results compared.) This can be seen as the worst-case scenario that maximizes the transmission overhead. From the results below, we see that, as expected, sockets carry much lower overhead than ECho.
<table>
<thead>
<tr>
<th>Single Integer</th>
<th>Mean Time</th>
<th>Std. Dev.</th>
</tr>
</thead>
<tbody>
<tr>
<td>ECho/Infopipe</td>
<td>2.0 sec</td>
<td>0.015 sec</td>
</tr>
<tr>
<td>TCP socket</td>
<td>0.12 sec</td>
<td>0.003 sec</td>
</tr>
</tbody>
</table>
The second microbenchmark measures the overhead of transmitting 1000 integers, repeated 10,000 times. (As mentioned above, each test is repeated 100 times and the means of the results compared.) This can be seen as the normal case for bulk transmissions. For the results below, we have a \( t\)-statistic of 146.6, so even though the ECho version is only slightly slower than sockets (about 2% difference), the difference is statistically significant at the 95% confidence level. Intuitively, the small difference is significant because the measurements have been very precisely reproducible (with standard deviations that are one order of magnitude smaller than the difference in response time).
<table>
<thead>
<tr>
<th>1000 Integers</th>
<th>Mean Time</th>
<th>Std. Dev.</th>
</tr>
</thead>
<tbody>
<tr>
<td>ECho/Infopipe</td>
<td>3.46 sec</td>
<td>0.004 sec</td>
</tr>
<tr>
<td>TCP sockets</td>
<td>3.39 sec</td>
<td>0.003 sec</td>
</tr>
</tbody>
</table>
In these microbenchmarks, an obvious experiment would be the comparison between the Infopipe-generated code using ECho and manually written ECho code, or a similar comparison using TCP sockets. Since the code and the measured results are the same, we omit them here. See the next section for similar results.
### 4.3 Data Streaming Experiment
Our first system level experiment is an evaluation of Infopipes for a multimedia streaming application. This application is representative of many distributed information flow applications where bulk data streaming happens. Our application has real-time requirements (unprocessed bits drop on the floor) that are implemented by quality of service (QoS) support. Although QoS is an integral part of Infopipe research, it is a complex topic. We will report on Infopipe support for QoS in a paper dedicated to that topic. In this paper, we focus on the effectiveness of Infopipe as a high level abstraction for information flow applications.
The evaluation consists of two parts. The first part is a comparison of the measured overhead of two versions of the application: the original version was hand-written, and the Infopipe version is the same application written using SIP/XIP Infopipes. This is a refinement of the microbenchmarks in Section 4.2, and shows the effectiveness of our implementation in a realistic scenario. The second part is a comparison of the source code length between the original version and the Infopipe version. This is an evaluation of the effectiveness of the Infopipe abstraction for the programming of information flow applications.
The multimedia streaming application is a medium-sized demonstration program being developed for DARPA’s PCES program. The program includes contributions from several universities and is integrated by BBN. The current version of the program (successfully integrated and demonstrated in April 2002) gathers video input from several sources, processes them, and redistributes the video streams to several destinations including video displays and automatic target recognition programs. Although the program contains significant technical innovation such as quality of service control, in this experiment we focus on the effect of Infopipe abstraction in terms of performance overhead and code size.
The experiment consists of taking the original application code and rewriting it using Infopipes for information flow processing. Both the original version and the Infopipe version use the same publish/subscribe communications middleware, called ECho [5]. The video streams are 320×240 pixel, 24-bit color depth raw images in the Unix Portable Pixmap (PPM) format.
The measurements were conducted on a Dell laptop (700 MHz Pentium III, 256 MB memory) running Linux 2.4.2. The following table shows the measured ECho channel initialization time for both versions. We ran the program 100 times with a cold-start initialization (new process). The statistical tests show a significant difference for the initialization time (t-statistic = -8.96). The small difference is due to minor differences in the generated code.
<table>
<thead>
<tr>
<th>Initialization</th>
<th>Mean Time</th>
<th>Std. Dev.</th>
</tr>
</thead>
<tbody>
<tr>
<td>Original</td>
<td>26.0 ms</td>
<td>0.4 ms</td>
</tr>
<tr>
<td>Infopipe</td>
<td>28.2 ms</td>
<td>2.4 ms</td>
</tr>
</tbody>
</table>
We also measured the time it takes to transfer a frame (the steady state). The table below shows the measured overhead as the mean over 100 runs.
For the quantitative source code evaluation, we restricted our attention to the 1182 lines (not including blanks and comments) in 5 source files that refer to video streams, at both sender and receiver. The application consists of approximately 15,000 lines of code, using many significant and relatively large middleware packages such as ECho (publish/subscribe messaging middleware). From these files, 441 lines are closely related to ECho. The application was rewritten using Infopipes (SIP) and hand-translated into XIP. The source code for this experiment is included in Appendix 7.1.
While the code savings are potentially better with SIP, a domain-specific language designed to support information flow, we decided to compare primarily with XIP. Although XIP is more verbose, it is also more “general-purpose” in its coverage of many flavors of Infopipe specification languages. Consequently, it is more directly comparable with the original hand-written code. This comparison also becomes independent of specific Infopipe Specification Language syntax. Comparing the XIP version to the original version, 12 lines were added, 171 lines were removed, and 37 lines were changed. The following table summarizes the change process from the original version to the Infopipe version. The result is the elimination of about 36% of the original source code related to information flow (in lines of code – loc).
<table>
<thead>
<tr>
<th>Infopipe-Related</th>
<th>Code Added</th>
<th>Code Removed</th>
<th>Code Modified</th>
</tr>
</thead>
<tbody>
<tr>
<td>441 loc</td>
<td>12 loc</td>
<td>171 loc</td>
<td>37 loc</td>
</tr>
</tbody>
</table>
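The 36% figure follows directly from the table: the net number of lines eliminated is the removed count minus the added count, taken as a fraction of the 441 Infopipe-related lines. A trivial helper (ours, for illustration) makes the arithmetic explicit:

```c
/* Net source reduction in percent: (removed - added) / total.
 * With the table's numbers, (171 - 12) / 441 is about 36%. */
double net_reduction_pct(int total_loc, int added_loc, int removed_loc)
{
    return 100.0 * (removed_loc - added_loc) / (double)total_loc;
}
```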
### 4.4 Web Source Composition Experiment
Our second system level experiment is an evaluation of Infopipes for a web information processing application. It takes an address, fetches a map for that address, and filters the map for display on a personal digital assistant (PDA) with limited resolution, capability (e.g., grayscale only), and network bandwidth. This application could be written using an “agent” style of programming, in which control passes from site to site, gathering, processing, and filtering information until it eventually produces a useful result.
Instead of using a control-driven model such as agents, we model the application as an information flow, which is implemented using Infopipes. Although we no longer have “agents” visiting different sites, the information flow goes through the appropriate sites and the information is augmented, processed, filtered, and transformed along the way. The result is useful information at the end of the composite Infopipe.
The concrete implementation of the application has four main components. At the beginning is a GUI with a wrapper to translate its output into an XML format. The GUI collects the user input (address) and through the wrapper sends it to the first stage of information pipeline, GetMapURL, which sends the address to MapQuest for translation. MapQuest sends back the URL of a color map. The URL is passed to the second stage of information pipeline, GetMapImage, which fetches the map (also from MapQuest in this particular case). Once GetMapImage receives the map, it passes the data to the third stage of the information pipeline, ImageConverter, which filters the image to an appropriate grayscale image of appropriate resolution for the PDA. At the end, ImageConverter sends the results back to the GUI running on the PDA, which then displays the grayscale image.
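The serial composition just described can be sketched in C as a chain of stage functions whose output feeds the next stage. The stage names mirror the text, but the signatures and the string-transforming stub bodies are hypothetical stand-ins for the real HTTP, image, and XML processing.

```c
/* Hypothetical sketch of the three information-pipeline stages; each
 * stub returns 0 on success and writes its output into the caller's
 * buffer.  The real stages talk to MapQuest and convert PPM images. */
#include <stdio.h>
#include <string.h>

static int get_map_url(const char *address, char *url, size_t n)
{   /* stand-in for querying MapQuest for the map URL */
    return snprintf(url, n, "url(%s)", address) < 0;
}

static int get_map_image(const char *url, char *ppm, size_t n)
{   /* stand-in for fetching the color map image */
    return snprintf(ppm, n, "map[%s]", url) < 0;
}

static int image_converter(const char *ppm, char *gray, size_t n)
{   /* stand-in for grayscale/resolution reduction for the PDA */
    return snprintf(gray, n, "gray{%s}", ppm) < 0;
}

/* Serial composition: the output of each stage feeds the next. */
static int run_pipeline(const char *address, char *result, size_t n)
{
    char url[256], ppm[512];
    if (get_map_url(address, url, sizeof url))   return -1;
    if (get_map_image(url, ppm, sizeof ppm))     return -1;
    return image_converter(ppm, result, n);
}
```

In the Infopipe version, this plumbing (buffers, connections, and hand-off between stages) is exactly the “bureaucratic” code that the generator emits automatically.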
We also divided this experiment into two parts. Since there is no sustained data transfer, the first part (execution overhead) measures the latency of application execution. We used the same machines described in Section 4.2 with the same configuration (single machine). The GUI was run as a PalmOS application on the PalmOS Emulator 3.0a7.
For this kind of application, the latency measurements are usually dominated by network access times. In addition, since there are external accesses (e.g., twice to MapQuest.com), it is difficult to reproduce measured results. Despite the large variances, the mean measured latencies (over 10 executions) of the two versions show no statistically significant difference.
<table>
<thead>
<tr>
<th>Latency</th>
<th>Mean</th>
<th>Std. Dev.</th>
</tr>
</thead>
<tbody>
<tr>
<td>Hand-written</td>
<td>6.18 sec</td>
<td>0.12 sec</td>
</tr>
<tr>
<td>Infopipe</td>
<td>6.22 sec</td>
<td>0.15 sec</td>
</tr>
</tbody>
</table>
The second part of the experiment, the quantitative code comparison, showed a more dramatic code reduction. This is due to the repeated I/O management code in each stage of the information pipeline, plus the data, socket, and XML handling code needed when XML data streams must be parsed and interpreted. By generating these “bureaucratic” code segments automatically, the Infopipe version reduces the code count to about 15% of the hand-written code size.
<table>
<thead>
<tr>
<th>Web Compos.</th>
<th>Hand-written</th>
<th>Infopipe</th>
</tr>
</thead>
<tbody>
<tr>
<td>GetMapURL</td>
<td>95 loc</td>
<td>19 loc</td>
</tr>
<tr>
<td>GetMapImage</td>
<td>104 loc</td>
<td>28 loc</td>
</tr>
<tr>
<td>ImageConverter</td>
<td>121 loc</td>
<td>43 loc</td>
</tr>
<tr>
<td>House-keeping</td>
<td>507 loc</td>
<td>26 loc</td>
</tr>
<tr>
<td>Total</td>
<td>827 loc</td>
<td>116 loc</td>
</tr>
</tbody>
</table>
5 Related Work
Remote Procedure Call (RPC) [1] is the basic abstraction for client/server software. By raising the level of abstraction, RPC facilitated the programming of distributed client/server applications. For example, RPC automates the marshalling and unmarshalling of procedure parameters, a tedious and maintenance-heavy process. Despite its usefulness, RPC provides limited support for information flow applications such as data streaming, digital libraries, and electronic commerce. To remedy these problems, extensions of RPC such as Remote Pipes [6] were proposed to support bulk data transfers and the sending of incremental results.
Instead of trying to further extend RPC-style abstractions, which provide convenient building blocks for the programming of distributed computations, Infopipes can be seen as a complementary abstraction to RPC and its derivatives. For distributed data streaming, for example, Infopipes provide good abstractions for distributed information flow with “local” computations (as filters within Infopipes).
6 Conclusion
In this paper, we briefly motivate and summarize the concept of Infopipe [16, 10, 9, 2] to support distributed information flow applications. Then, we describe the implementation of the SIP variant of Infopipe Specification Languages. The implementation translates SIP into an XML-based intermediate representation called XIP, which is then stepwise transformed into executable code.
We used one set of microbenchmarks and two realistic applications for an experimental evaluation of the SIP/XIP software tools. The measurement results show that the run-time overhead of Infopipe-generated code is comparable to that of manually written code. For example, statistically there is no difference for steady-state data transfers, with only 7% additional overhead for Infopipe initialization.
The evaluation of the source code for these experiments shows a significant reduction in the number of lines of code. The Infopipe code is 36% smaller than the original code for the multimedia streaming application, and is reduced to only 15% of the original code for the web source combination application. The declarative nature of SIP/XIP also makes it easier to write and maintain the Infopipe versions of these applications (see Appendix Section 7 for a direct comparison).
These experiments show the promise of the Infopipe approach and the advantages of our SIP/XIP implementation strategy. We are moving forward with the addition of QoS support and the application of program specialization techniques [13, 15, 17] to improve the performance of generated code.
Funding Acknowledgements
This work was done as part of the Infosphere project, funded by DARPA through the Information Technology Expeditions, Ubiquitous Computing, Quorum, and PCES programs. The research was also partially funded by NSF’s CISE directorate, through the ANIR and CCR divisions. In addition, the research was partially funded by Intel.
References
9. R. Koster, A. Black, J. Huang, J. Walpole and C. Pu, “Infopipes for Composing Distributed Information Flows”. In the Proceedings of the
7 Appendices
In the source code attached below, we use the normal font (between sizes 11 and 9) to show areas of interest. When code is irrelevant for our comparison purposes, we reduce it to an illustrative size (sizes 4 or smaller) to show that the code is there, but it is not part of our comparison.
7.1 SIP Example – Multi-UAV Application
The first step in integrating an infopipe into an application is to write the SIP specification for it. This involves creating declarations for data types, filters, and pipes. Data types are built out of primitive types which roughly mirror types available in C. Eventual plans are to mirror the types available as part of the SOAP specification.
Our contribution to the Multi-UAV demo involves generating the communication code from the sending process to the player, which displays the images as a movie. In this case, we have two infopipes which are composed together. The first infopipe makes the data available on the network, and the second infopipe delivers the data to the player. The data type for the exchange is also specified in SIP. The application relies on several filters to process the information before transmission to reduce network load. These can be defined by name and referenced in the specification of an infopipe.
The SIP version of the Multi-UAV demo:
Instead of showing the C code generated from XIP, we include here the original version (also written in C) of the Multi-UAV application program. It is substantially similar to the one generated from XIP (as demonstrated by the measured overhead in Section 4.3), and it shows the difference in code quantity and quality as discussed in that section. As can be seen below, there is a lot of code devoted to creating connections and initializing the environment. The areas of large text indicate code that is replaced by the generated code.
```c
/* avs_raw.h */
typedef struct {
int tag;
char ppm1;
char ppm2;
int size;
int width;
int height;
char *buff;
} Raw_data, *Raw_data_ptr;
extern IOField Raw_data_fld[];
/* 1 - 640 * 480 ********/
#define AVSIMAGE1C 921600 /* 640 * 480 - color */
typedef struct {
int tag;
char ppm1;
char ppm2;
int size;
int width;
int height;
int maxval;
char buff[AVSIMAGE1C];
} Raw_data1C, *Raw_data1C_ptr;
extern IOField Raw_data1C_fld[];
#define AVSIMAGE1G 307200
/* 640 * 480 - grey */
typedef struct {
int tag;
char ppm1;
char ppm2;
int size;
int width;
int height;
int maxval;
char buff[AVSIMAGE1G];
} Raw_data1G, *Raw_data1G_ptr;
extern IOField Raw_data1G_fld[];
#define AVSIMAGE1DC 1843200 /* 640 * 480 * 2 - color */
typedef struct {
int tag;
char ppm1;
char ppm2;
int size;
int width;
int height;
int maxval;
char buff[AVSIMAGE1DC];
} Raw_data1DC, *Raw_data1DC_ptr;
extern IOField Raw_data1DC_fld[];
#define AVSIMAGE1DG 614400 /* 640 * 480 * 2 - grey */
typedef struct {
int tag;
char ppm1;
char ppm2;
int size;
int width;
int height;
int maxval;
char buff[AVSIMAGE1DG];
} Raw_data1DG, *Raw_data1DG_ptr;
extern IOField Raw_data1DG_fld[];
#define AVSIMAGE4DC 1843200 /* 80 * 60 * 2 - color */
typedef struct {
int tag;
char ppm1;
char ppm2;
int size;
int width;
int height;
int maxval;
char buff[AVSIMAGE4DC];
} Raw_data4DC, *Raw_data4DC_ptr;
extern IOField Raw_data4DC_fld[];
/* Some other data format */
typedef struct {
int tag;
char ppm1;
char ppm2;
int size;
int width;
int height;
int maxval;
char buff[AVSIMAGE3G];
} Raw_data3G, *Raw_data3G_ptr;
extern IOField Raw_data3G_fld[];
extern IOField Raw_data3DG_fld[];
#if HAVE_CONFIG_H
# include <config.h>
#endif
#include <stdio.h>
#include <assert.h>
extern IOField Raw_data1DG_fld[] = {
{"tag","integer",sizeof(int),IOOffset(Raw_data1DG_ptr,tag),},
{"ppm1","char",sizeof(char),IOOffset(Raw_data1DG_ptr,ppm1),},
{"ppm2","char",sizeof(char),IOOffset(Raw_data1DG_ptr,ppm2),},
{"size","integer",sizeof(int),IOOffset(Raw_data1DG_ptr,size),},
{"width","integer",sizeof(int),IOOffset(Raw_data1DG_ptr,width),},
{"height","integer",sizeof(int),
IOOffset(Raw_data1DG_ptr,height),},
{"buff", IOArrayDecl(char,AVSIMAGE1DG), sizeof(char),
IOOffset(Raw_data1DG_ptr,buff[0]),},
{NULL,NULL},
};
/* 2 - 320 * 240 ***********/
extern IOField Raw_data2C_fld[] = {
{"tag","integer",sizeof(int),IOOffset(Raw_data2C_ptr,tag),},
{"ppm1","char",sizeof(char),IOOffset(Raw_data2C_ptr,ppm1),},
{"ppm2","char",sizeof(char),IOOffset(Raw_data2C_ptr,ppm2),},
{"size","integer",sizeof(int),IOOffset(Raw_data2C_ptr,size),},
{"width","integer",sizeof(int),IOOffset(Raw_data2C_ptr,width),},
{"height","integer",sizeof(int),
IOOffset(Raw_data2C_ptr,height),},
{"buff", IOArrayDecl(char,AVSIMAGE2C), sizeof(char),
IOOffset(Raw_data2C_ptr,buff[0]),},
{NULL,NULL},
};
extern IOField Raw_data2G_fld[] = {
{"tag","integer",sizeof(int),IOOffset(Raw_data2G_ptr,tag),},
{"ppm1","char",sizeof(char),IOOffset(Raw_data2G_ptr,ppm1),},
{"ppm2","char",sizeof(char),IOOffset(Raw_data2G_ptr,ppm2),},
{"size","integer",sizeof(int),IOOffset(Raw_data2G_ptr,size),},
{"width","integer",sizeof(int),IOOffset(Raw_data2G_ptr,width),},
{"height","integer",sizeof(int),
IOOffset(Raw_data2G_ptr,height),},
{"buff", IOArrayDecl(char,AVSIMAGE2G), sizeof(char),
IOOffset(Raw_data2G_ptr,buff[0]),},
{NULL,NULL},
};
/* avs_raw.c */
/* avs_raw2.c */
/* avs_source.c */
/* some global variables, helper
functions omitted ... */
int main(int argc, char* argv[])
{
/* ... */
/*Creation of channel and
registration*/
gen_jobThread_init();
cm = CManager_create();
CMfork_comm_thread(cm);
if (signal(SIGINT, interruptHandler) == SIG_ERR)
  Styx_errQuit("Signal error");
ec = ECho_CM_init(cm);
chan2C = EChannel_typed_create(ec,
  Raw_data2C_fld, NULL);
if (chan2C == NULL)
  Styx_errQuit("Failed to create 2C channel.\n");
sourceHandle2C =
  ECsource_typed_subscribe(chan2C,
    Raw_data2C_fld, NULL);
chan = EChannel_typed_create(ec,
  Raw_data_fld, NULL);
if (chan == NULL)
  Styx_errQuit("Failed to create channel.\n");
fprintf(stdout, "Echo channel ID: %s\n", ECglobal_id(chan));
sourceHandle =
  ECsource_typed_subscribe(chan,
    Raw_data_fld, NULL);
fprintf(stdout, "Echo 2C (320x240-color) channel ID: %s\n",
  ECglobal_id(chan2C));
fprintf(stdout, "\n");
if (debugging) {
  sprintf(shotsentFile, "shotsent%d.ppm", i+1);
  debuggingfd = open(shotsentFile,
    O_CREAT|O_WRONLY, S_IRWXU|S_IRGRP|S_IROTH);
  sprintf(header, "%c%c\n%d %d\n%d\n", rawrec2CP->ppm1, rawrec2CP->ppm2,
    rawrec2CP->width, rawrec2CP->height, 255);
  write(debuggingfd, header, strlen(header));
  if (rawrec2CP->buff != NULL)
    write(debuggingfd, rawrec2CP->buff, rawrec2CP->size);
  close(debuggingfd);
  free(rawrecP->buff);
}
else {
debugrecP->tag = i+1;
ECsubmit_typed_event(sourceHandle, debugrecP);
++NumRecSubmitted;
/*^^^*/
if (MeasureMe)
  if (NumRecSubmitted == reported * reportFreq) {
    bwHist = (double*) malloc(((int) (NumRecSubmitted/Freq)) *
      sizeof(double));
    if (bwHist == NULL) Styx_errQuit("Not enough resources.\n");
    sensNet_GetHistory(bwHist, (int) (NumRecSubmitted/Freq), SENSN
    fprintf(stdout, "Bandwidth mean: %f, sdev: %f\n",
      Stats_mean(bwHist, (int) (NumRecSubmitted/Freq)),
      Stats_sdev(bwHist, (int) (NumRecSubmitted/Freq)));
    free(bwHist);
    ++reported;
  }
} /* end for */
dumpStats();
if (MeasureMe) sensNet_Finish();
//EChannel_destroy(chan);
//CManager_close(cm);
exit (EXIT_SUCCESS);
} /* main */
```

7.2 Web Source Combination Application

In the Java version of the SIP/XIP Infopipe, all the data flowing through Infopipes is XML-formatted. For example, there is a "mapImage" data type for containing the data of a map image. The mapImage data exchanged between infopipes is formatted like this:

```xml
<infopipeDataFormat version="0.1">
  <dataContent type="mapImage">
    <contentType>_ContentType_</contentType>
    <contentTransferEncoding>_EncodingType_</contentTransferEncoding>
    <contentBody>_Body_</contentBody>
  </dataContent>
</infopipeDataFormat>
```
Each Infopipe parses the XML data, and generates
another XML-formatted data stream after processing.
Without Infopipes, programmers need to add
the parsing and generating code as shown below:
```java
public void parseXML(Reader in) throws Exception {
  InputSource inputSource = new InputSource(in);
  DOMParser parser = new DOMParser();
  try {
    parser.parse(inputSource);
  } catch (IOException ioe) {
    ioe.printStackTrace();
    throw new Exception(ioe.getMessage());
  } catch (SAXException se) {
    se.printStackTrace();
    throw new Exception(se.getMessage());
  }
  Node root = parser.getDocument().getDocumentElement();
  Node dataNode = null;
  Node currNode = null;
  NodeList nodeList = null;
  try {
    dataNode = XPathAPI.selectSingleNode(root,
      "dataContent");
    if (dataNode == null ||
        !((Element) dataNode).getAttribute("type").equals(
          "mapImage"))
    { throw new Exception("Invalid data"); }
currNode = null;
currNode = XPathAPI.selectSingleNode(dataNode, "contentType/text()");
if (currNode != null) {
if (currNode.getNodeType() == Node.TEXT_NODE)
contentType = currNode.getNodeValue();
else {
throw new Exception();
}
} else {
throw new Exception("Invalid data");
}
currNode = XPathAPI.selectSingleNode(dataNode, "contentTransferEncoding/text()");
if (currNode != null) {
if (currNode.getNodeType() == Node.TEXT_NODE)
contentTransferEncoding = currNode.getNodeValue();
else {
throw new Exception();
}
} else {
throw new Exception("Invalid data");
}
currNode = XPathAPI.selectSingleNode(dataNode, "contentBody/text()");
if (currNode != null) {
if (currNode.getNodeType() == Node.TEXT_NODE)
contentBody = currNode.getNodeValue();
else {
throw new Exception();
}
} else {
      throw new Exception("Invalid data");
    }
  } /* catch clauses for the try block and the
       rest of parseXML omitted ... */
} // end parseXML

public String formatToXML() {
String doc = new String();
doc = "<infopipeDataFormat version="0.1""
dataContent="mapImage">
if (contentType == null) {
doc = doc + "<contentType></contentType>";
} else {
doc = doc + "<contentType>" +
contentType + "</contentType>";
}
if (contentTransferEncoding == null) {
doc = doc + "<contentTransferEncoding>
</contentTransferEncoding>";
} else {
doc = doc + "<contentTransferEncoding>" +
contentTransferEncoding + "</contentTransferEncoding>";
}
if (contentBody == null) {
doc = doc + "<contentBody></contentBody>";
} else {
doc = doc + "<contentBody>" +
contentBody + "</contentBody>";
}
doc = doc + "</dataContent>
</infopipeDataFormat>";
return doc;
} // End of the code
```

Using SIP/XIP, we can replace all of the above code with only five lines:

```xml
<dataDef name="mapImage">
  <arg type="string" name="contentType"/>
  <arg type="string" name="contentTransferEncoding"/>
  <arg type="string" name="contentBody"/>
</dataDef>
```
Bootstrapping Software Distributions
Pietro Abate
Univ Paris Diderot, PPS,
UMR 7126, Paris, France
Pietro.Abate@pps.univ-paris-diderot.fr
Johannes Schauer
Jacobs University Bremen,
College Ring 3, MB670,
28759 Bremen
j.schauer@jacobs-university.de
ABSTRACT
New hardware architectures and custom coprocessor extensions are introduced to the market on a regular basis. While it is relatively easy to port a proprietary software stack to a new platform, FOSS distributions face major challenges. Bootstrapping distributions proved to be a yearlong manual process in the past, due to the large number of dependency cycles that had to be broken by hand.
In this paper we propose a heuristic-based algorithm to remove build dependency cycles and to create a build order for automatically bootstrapping a binary-based software distribution on a new platform.
1. INTRODUCTION
In recent years, the mobile and embedded device markets have driven innovators to produce a large number of new devices and hardware platforms. In order to accelerate the adoption of these new products, major commercial vendors chose to provide application developers with a platform-agnostic virtual machine, taking on themselves the burden of porting the native software stack to new architectures and hardware platforms. As a consequence, the efforts to adapt, compile, and fiddle with low-level details of middleware components such as the kernel or the runtime system are completely hidden from application developers. This model allows vendors to tightly control the number of dependencies among different software components and to reduce the porting procedure to a routine exercise.
The situation is different for collections of components based on Free and Open Source Software (FOSS). Recent efforts to provide a unified platform for mobile and desktop computing based on FOSS components (like Ubuntu for phones and tablets\(^1\)) reignited the need to develop a model and tools to help distribution designers with the task of porting, and natively compiling, software components to new platforms.
\(^*\)Work partially performed at the IRILL center for Free Software Research and Innovation in Paris, France.
Source packages can also be seen as software product lines [8], where build dependencies identify specific configuration options to match the requirements of a family of software products. In the same vein, software distributions can be seen as software products customized for a specific device [9]. Bootstrapping a distribution on a new architecture amounts to customizing a product line for a specific architecture and instantiating a new software product that is consistent with the constraints imposed by the new hardware.
In this paper we present algorithms and tools to help distribution architects with the task of analysing the dependency web associated with the compilation process. First we provide a formal framework to reason about the bootstrapping problem (Section 2), and in Section 3 we show a few algorithms that we developed to untangle the dependency web. We provide a real application of these algorithms using data from the Debian distribution in Section 4, while implementation details are presented in Section 5. In Section 6 we discuss related work in this area, and
\(^1\)http://www.ubuntu.com/devices
we summarize our future plans and draw our conclusions in Section 7.
1.1 The Bootstrap Process
Bootstrapping a distribution is defined as the process by which software is compiled, assembled into deployable units, and installed on a new device/architecture without the aid of any other pre-installed software. Different approaches are possible, such as using virtual environments emulating the target hardware, using a set of binaries often provided by the hardware vendor, or setting up an ad-hoc cross-compiling environment on a machine with a different architecture. For example, ARM routinely provides an emulator before the release of new hardware to ease its adoption, while cross compilation is mainly used by embedded distributions, as the target hardware is often not powerful enough to compile software itself.
We call the architecture of the machine we compile on the native architecture, and the architecture we compile for the target architecture. When the native and target architectures are the same, the process of creating a new software package is called native compilation; when they differ, it is called cross compilation. The method presented in this paper involves first the creation (by cross compilation) of a minimal build system, and later the creation of the final set of binary packages on the new device (by native compilation). Cross compiling a small subset of source packages is necessary because an initial minimal set of binary packages must exist in order to compile a larger set of source packages natively.
Once a sufficient number of source packages is cross compiled – we call the set of binary packages produced by them a minimal system – new source packages can be compiled natively on the target system. The minimal system is composed of a coherent set of binary packages that is able to boot on the new hardware and to provide a minimal working system. It contains at the very least an operating system, a user shell and a compiler, and it is custom tailored by distribution architects. We developed tools to help them refine and complete this selection. Even though the problems associated with cross and native compilation are conceptually similar, in this paper we will assume the existence of such a minimal system and consider mainly the native compilation phase.
Native compilation entails computing an order in which source packages are compiled and the binary packages they produce are made available on the new system. This order is imposed by the presence of build dependencies, that is, binary packages that must be available on a system in order to create new binary packages from source packages. Source packages without build dependencies, or with only build dependencies that can already be satisfied on the minimal system, can immediately be compiled, and all the resulting binary packages made available. Source packages that require binary packages which are not yet available are scheduled for later consideration.
In an ideal situation, where the build dependency graph admits a natural topological order, bootstrapping a distribution for a new architecture would not pose any particular challenge. Unfortunately, for larger distributions it is common that the build dependency graph cannot be topologically sorted because of the presence of millions of dependency cycles.
Figure 1: Bootstrapping Problem Example
In Figure 1 we give a simple artificial example where vertices represent source packages, and edges represent dependencies among source packages. For now, a dependency between two source packages $S_1$, $S_2$ is a connection defined by the fact that the package $S_1$, in order to be compiled, requires a binary package which will be provided by the source package $S_2$ only once $S_2$ is compiled.
In Figure 1(a) we can identify four cycles. The self cycle of vertex $S_5$ is a typical example of a cycle associated with the source package of a programming language that requires itself to compile. The other cycles have length 2, 3 and 4. Even though there are multiple ways of removing dependency cycles, it is not possible to develop an exact algorithm by looking only at the packages' meta-data. Up until now, whenever a distribution was to be bootstrapped, package maintainers inspected source packages by hand to decide which build dependencies are really needed and which are optional. We developed a few heuristics to help maintainers with the task of identifying such build dependencies. In Figure 1(b) we obtain a directed acyclic graph (DAG) by removing the dependencies of $S_1$ on $S_5$ and of $S_1$ on $S_2$, and the self-cycle of $S_5$. From it we can easily compute a build order where packages $S_4$ and $S_5$ are compiled first (possibly in parallel), then package $S_1$, followed by $S_6$, and finally $S_3$ and $S_2$.
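The remove-then-sort procedure just described can be sketched in a few lines of Python. This is a minimal sketch: the edge set below is a hypothetical reconstruction of Figure 1(a), which is not reproduced here, with `deps[s]` listing the source packages that must be compiled before `s`.

```python
from graphlib import TopologicalSorter

# Hypothetical reconstruction of the Figure 1(a) graph:
# deps[s] = source packages that must be compiled before s.
deps = {
    "S1": {"S2", "S4", "S5"},
    "S2": {"S6"},
    "S3": {"S6"},
    "S4": set(),
    "S5": {"S5"},   # self cycle: a language that needs itself to compile
    "S6": {"S1"},
}

# Build dependencies dropped (by hand or by heuristics) to obtain Figure 1(b).
dropped = {("S1", "S5"), ("S1", "S2"), ("S5", "S5")}

dag = {s: {t for t in ts if (s, t) not in dropped} for s, ts in deps.items()}

# Any topological order of the resulting DAG is a valid build order.
order = list(TopologicalSorter(dag).static_order())
print(order)
```

With these (assumed) edges, `static_order` emits $S_4$ and $S_5$ before $S_1$, then $S_6$, and finally $S_2$ and $S_3$, matching the order discussed above.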
Once all source packages are compiled, we can restart the process, this time re-instantiating all the build dependencies previously dropped. In this second compilation stage, all the features that were disabled to remove build cycles are enabled once again. Since all binary packages have now been compiled, the build dependencies of all source packages are satisfied and they can therefore be compiled in any order. This recompilation is necessary to build every component with its full feature set.
**Our Contribution.**
By analysing the nature of these cycles and by pinpointing packages that are more relevant than others, we provide heuristics and tools to help developers and distribution editors reduce the burden associated with the bootstrapping process and the time to market of a software distribution on a new platform. We present a heuristic-based algorithm to break dependency cycles and we validate our results using the Debian GNU/Linux distribution as an example.
2. DEFINITIONS
In this section we will introduce a formal framework to model the bootstrap problem. First we will define package repositories and the notion of installability [15]. Then we will explain how mixed repositories, composed of source and binary packages, can be used in order to compute a compilation order.

A repository is a tuple \((P, \text{Dep}, \text{Con}, \text{Bin})\), where \(P\) is a set of packages, \(\text{Dep}\) associates each package with its (possibly disjunctive) dependencies, \(\text{Con}\) is the set of pairs of conflicting packages, and \(\text{Bin}\) maps each source package to the set of binary packages it builds. We assume that versions are non-negative integers.
Figure 2: Binary/source package repository
The dependencies of a binary package indicate which packages must be installed together with it, and the conflicts which packages must not. The build dependencies of a source package identify binary packages that must be installed in order for the source package to be compiled. Build conflicts are binary packages that must not be installed in order to compile the source package. The build dependencies of source packages are always binary packages.
Definition 2.4. Let \((P, \text{Dep}, \text{Con}, \text{Bin})\) be a repository. The dependency relation is the binary relation \(\rightarrow \subseteq P \times P\) defined as follows: \(p \rightarrow q\) if there exists \(D \in \text{Dep}(p)\) such that \(q \in D\).
This definition can be extended to a multi-step relation: \(q\) is in the dependency closure of a package \(p\) if \(p \rightarrow q\) in one or more steps. We then say \(p \rightarrow^+ q\). In Figure 2, the dependency closure of the package \(S1\) is the set \(\{a, b, c\}\). While packages \(a, b\) are direct dependencies, package \(c\) is a transitive dependency.
Definition 2.5. Let \((P, \text{Dep}, \text{Con}, \text{Bin})\) be a repository and \(p \in P\). The dependency closure of \(p\) is the subset of \(P\) defined as
\[ \{q \mid q \in P, p \rightarrow^+ q\} \]
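Definitions 2.4 and 2.5 translate directly into a worklist computation. The sketch below is in Python; the Dep encoding of Figure 2 is hypothetical (the text only states that the closure of S1 is {a, b, c}), with each inner set representing one disjunctive dependency.

```python
# Dep maps each package to a list of disjunctive dependencies;
# p -> q holds when q appears in some D in Dep(p).
# Hypothetical encoding consistent with Figure 2: S1 directly
# depends on a and b, and c is reached transitively through b.
Dep = {
    "S1": [{"a"}, {"b"}],
    "S2": [{"a", "d"}],
    "a": [],
    "b": [{"c"}],
    "c": [],
    "d": [],
}

def dependency_closure(p):
    """All q with p ->+ q (one or more dependency steps)."""
    closure, todo = set(), [p]
    while todo:
        cur = todo.pop()
        for disjunction in Dep.get(cur, []):
            for q in disjunction:
                if q not in closure:
                    closure.add(q)
                    todo.append(q)
    return closure

print(sorted(dependency_closure("S1")))  # ['a', 'b', 'c']
```

Note that both alternatives of a disjunction enter the closure, since the relation \(\rightarrow\) holds for every member of every dependency set.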
Source packages and binary packages are related. A binary package is built from a source package, and a source package builds a set of binary packages.
Definition 2.6. Let \((P, \text{Dep}, \text{Con}, \text{Bin})\) be a repository. The function \(\text{Src}: P \rightarrow P\) is defined as follows:
\[ \text{Src}(p) = s \text{ such that } p \in \text{Bin}(s) \]
In Figure 2, the source package corresponding to the binary package \(a\) is \(S2\).
Installation Sets.
An installation is a consistent set of packages, that is, a set of packages satisfying abundance (every package in the installation has its dependencies satisfied) and peace (no two packages in the installation are in conflict) [15]. Formally:
Definition 2.7. Let \(R = (P, \text{Dep}, \text{Con}, \text{Bin})\) be a repository. An \(R\)-installation \(I\) is a subset \(I \subseteq P\) such that for every \(p \in I\) the following properties hold:
Abundance. Every package has what it needs: for every $p \in I$, and for every dependency $D \in \text{Dep}(p)$ we have $I \cap D \neq \emptyset$.
**Peace.** No two packages conflict: $(I \times I) \cap \text{Con} = \emptyset$.
We say that a package $p$ is installable in a repository $R$ if there exists an $R$-installation $I$ such that $p \in I$.
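Abundance and peace can be checked directly on a candidate package set. Below is a minimal Python sketch over a hypothetical repository fragment (a package p depending on q or r, with q conflicting with r); the package names are assumptions for illustration only.

```python
def is_installation(I, Dep, Con):
    """Check that I is an R-installation (Definition 2.7)."""
    # Abundance: every dependency of every package in I is met within I.
    abundance = all(I & D for p in I for D in Dep.get(p, []))
    # Peace: no two packages of I are in conflict.
    peace = all((p, q) not in Con for p in I for q in I)
    return abundance and peace

# Hypothetical fragment: p depends on q or r; q conflicts with r.
Dep = {"p": [{"q", "r"}]}
Con = {("q", "r"), ("r", "q")}

print(is_installation({"p", "q"}, Dep, Con))       # True
print(is_installation({"p", "q", "r"}, Dep, Con))  # False: q and r conflict
print(is_installation({"p"}, Dep, Con))            # False: dependency unmet
```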
**Definition 2.8.** Given a repository $R = (P, \text{Dep}, \text{Con}, \text{Bin})$ and a package $p \in P$, we define the partial function $IS_R(p) = I$ to denote one $R$-installation $I$ such that $p \in I$.
We note that in Figure 2, even though the dependency of the package $S2$ contains a disjunction, the installation set of $S2$ is unique, namely $IS(S2) = \{a, d\}$.
**Definition 2.9.** Let $R = (P, \text{Dep}, \text{Con}, \text{Bin})$ be a repository, $B = \{p \in P \mid \text{Bin}(p) = \emptyset\}$ the set of binary packages and $S = P \setminus B$ the set of source packages. We say that $R$ is self-contained if the following conditions hold:

$\forall b \in B$, $\text{Src}(b) \in S$

$\forall s \in S$, $\exists I \subseteq B$ such that $I \cup \{s\}$ is an $R$-installation
In other words a repository $R$ is self-contained if all binary packages in $R$ are built from the source packages in $R$ and all source packages in $R$ can be built using only binary packages in $R$.
**Lemma 2.10.** Let $R$ be a self-contained repository. Then all source packages in $R$ can be compiled.
**Compilation.**
The notion of $R$-installation applies to all types of packages. Intuitively, for a package $p$ there exists an $R$-installation if all (runtime or build) dependencies can be satisfied. However, in order to characterize the compilation process we need to further introduce the notion of $R$-compilation. Intuitively, a source package $s$ can be compiled in a repository $R$ if there exists an $R$-installation $I$ with $s \in I$ and all source packages used to build the binary packages in $I$ can themselves be compiled. The compilation of a package not only implies that there exists an installation set $I$ for its build dependencies, but also that all source packages associated with the binary packages in $I$ can themselves be compiled.
First we define the following relation among source packages.
**Definition 2.11.** Let $(P, \text{Dep}, \text{Con}, \text{Bin})$ be a repository. The binary relation $\sim \subseteq S \times S$ is defined as: $s \sim t$ if there exists an installation set $I$ such that $s \in I$ and, for some $p \in I$, $t = \text{Src}(p)$.
We now extend the one step definition above to an arbitrary number of dependency steps.
**Definition 2.12.** The relation $\sim^* \subseteq S \times S$ is defined as $s \sim^* t$ if $s \sim t$, or there exist $s_1, \ldots, s_n$ such that $s \sim s_1$, $s_n \sim t$ and $s_i \sim s_{i+1}$ for $1 \leq i < n$.
The relation $\sim^*$ is transitive. We define an $R$-compilation of a source packages $s$ as the closure of $P$ under $\sim^*$.
**Definition 2.13.** Let $R = (P, \text{Dep}, \text{Con}, \text{Bin})$ be a repository and $s$ a source package. An $R$-compilation of $s$ is a set $I \subseteq P$ such that $I = \{t \mid s \sim^* t\}$. We say that a source package $s \in S$ can be compiled if there exists an $R$-compilation $I$ such that $s \in I$.
For the repository in Figure 2, the $R$-compilation set of the package $S1$ is the set $\{S1, S2, S3, S4\}$: to compile the package $S1$ we need the binary packages $a$, $b$ and transitively package $c$. The package $a$ is generated by the source package $S2$, while $\text{Src}(b) = S3$ and $\text{Src}(c) = S4$. Since $S3$ and $S4$ do not have any build dependencies, they do not pose any problem. However, it is easy to notice that the package $S2$ build-depends also on the package $d$ that is in turn generated by the source $S1$ giving an example of a build dependency cycle.
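The $R$-compilation set worked out above is the closure of $\{S1\}$ under the one-step relation, which can be computed by a simple fixpoint. The IS and Src tables below transcribe the Figure 2 data as discussed in the text (per Definition 2.8, each installation set also contains the package itself):

```python
# Data transcribed from the Figure 2 discussion in the text.
IS = {  # a chosen installation set for each source package
    "S1": {"S1", "a", "b", "c"},
    "S2": {"S2", "a", "d"},
    "S3": {"S3"},   # no build dependencies
    "S4": {"S4"},   # no build dependencies
}
Src = {"a": "S2", "b": "S3", "c": "S4", "d": "S1"}

def r_compilation(s):
    """Closure of {s} under the one-step relation ~ (Definition 2.13)."""
    comp, todo = {s}, [s]
    while todo:
        t = todo.pop()
        for p in IS[t]:
            src = Src.get(p)  # binary packages have a source; sources do not
            if src is not None and src not in comp:
                comp.add(src)
                todo.append(src)
    return comp

print(sorted(r_compilation("S1")))  # ['S1', 'S2', 'S3', 'S4']
```

The fixpoint also exhibits the build dependency cycle noted above: processing $S2$ pulls in $\text{Src}(d) = S1$, which is already in the set.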
Since an $R$-compilation for a given package is not unique, we identify a unique subset of packages common to all $R$-compilations. This subset is identified by restricting Definition 2.11 to strong dependencies [1]. We quickly recall the definition:
**Definition 2.14.** Given a repository $R$, we say that a package $p$ in $R$ strongly depends on a package $q$, written $p \Rightarrow q$, if $p$ is installable in $R$ and every installation of $R$ containing $p$ also contains $q$.
Intuitively, $p$ strongly depends on $q$ with respect to $R$ if it is not possible to install $p$ without also installing $q$. In Figure 2, the package $S2$ strongly depends on the packages $d$ and $a$: the package $a$ is a direct dependency, while the package $d$, even if part of a disjunctive dependency, is the only choice because of the conflict between packages $d$ and $a$.
**Definition 2.15.** Let $R$ be a repository. Two source packages are related, written $s \simeq t$, if there exists $I = \{q \in R \mid q \Rightarrow s\}$ such that $s \in I$ and, for some $p \in I$, $t = \text{Src}(p)$.
In the same way as in Definition 2.12 we can define $\simeq^*$ as the closure of the relation $\simeq$, and finally define the core $R$-compilation as in Definition 2.13, but w.r.t. the relation $\simeq^*$. The core $R$-compilation allows us to focus on a unique subset of the $R$-compilation sets and provides a lower bound on the dependencies needed to compile a source package.
If an $R$-compilation set exists, then it is a priori not unique. This is due to the fact that package dependencies contain disjunctions. However, because of the specific nature of the problem, in practice this issue does not arise too often: the number of disjunctions in build dependencies is usually very limited. Moreover, using specialized solvers [2], we can apply heuristics to minimize the number of packages to consider, possibly reducing the number of build cycles. The restriction to core $R$-compilation sets allows us to provide a lower bound for the problem.
**Source Graph.**
We define the source graph associated to an $R$-compilation set as follows:
**Definition 2.16.** Let $R = (P, \text{Dep}, \text{Con}, \text{Bin})$ be a self-contained repository and $I$ be an $R$-compilation set for a set of source packages $S \subseteq P$. The associated source graph $G = (V, E)$ is defined as:
- $V = I$
- $E = \{(s, t) \mid s, t \in I \text{ and } s \sim t\}$.
Figure 4(a) shows the source graph associated to the repository in Figure 2. In the same way we can define the core source graph of a core $R$-compilation set by using the $\simeq$ relation instead. The core source graph is a proper subgraph of every source graph, no matter which $R$-compilation set is chosen.
Note that the source graph might contain cycles and hence it is not possible to use a topological sort algorithm. In the next section we will provide methods to compute the set of packages that can be compiled in a repository and heuristics to drop optional build dependencies from the source graph and to generate a build order.
When we mention build dependency cycles, we always mean elementary cycles:
**Definition 2.17.** Given a graph $(V, E)$, a path is a sequence of vertices $v_1 \cdots v_n$ such that for each $i$, $1 \leq i \leq n - 1$, $(v_i, v_{i+1}) \in E$, that is, between every pair of subsequent vertices there is an edge connecting them. If $v_1 = v_n$ then the path is called a cycle. A cycle is called elementary if no vertex (except the first and last) appears more than once in it.
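For concreteness, elementary cycles up to a bounded length can be enumerated by a restricted depth-first search; the sketch below is illustrative only (the paper's implementation uses a modified Johnson's algorithm, Section 3.2):

```python
def elementary_cycles(edges, max_len):
    """Enumerate elementary cycles with at most max_len vertices by bounded
    DFS. Each cycle is reported once, anchored at its smallest vertex."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    cycles = []

    def dfs(start, u, path):
        for v in adj.get(u, []):
            if v == start:
                cycles.append(tuple(path))          # closed an elementary cycle
            elif v not in path and v > start and len(path) < max_len:
                dfs(start, v, path + [v])           # extend without repeats

    for s in sorted(adj):
        dfs(s, s, [s])
    return cycles

# Hypothetical toy graph: a 2-cycle a<->b plus a 3-cycle a->b->c->a.
edges = [("a", "b"), ("b", "a"), ("b", "c"), ("c", "a")]
print(elementary_cycles(edges, 3))  # [('a', 'b'), ('a', 'b', 'c')]
```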
### 2.1 Problem statement
To formally describe the bootstrap problem, we first need to define the tools to relax the build dependency constraints of source packages so that, in the end, all source packages in a given repository can be compiled. A build profile is a function which transforms repositories in order to make the source graph acyclic. Formally:
**Definition 2.18.** A build profile Pmap is a function that transforms repositories into repositories.
Let R = (P, Dep, Con, Bin) and R_1 = (P, Dep_1, Con_1, Bin_1) such that R_1 = Pmap(R). R_1 must satisfy the following constraints:
- Bin_1 \subseteq Bin.
- If R is self-contained then R_1 is self-contained.
- Let G and G_1 be the source graphs associated respectively to R and R_1. Then the number of elementary cycles of G_1 is less than or equal to the number of elementary cycles of G.
Finally, we can give the formal problem statement. The bootstrap problem is defined as a sequence of refinements where, starting from a repository R = (P, Dep, Con, Bin) and a minimal build system B_0 \subseteq P, all source packages in R are compiled throughout multiple iterations, increasing the set of binary packages B_1 \cdots B_n until all source packages can be compiled and produce all binary packages B in the repository R. Formally:
**Definition 2.19.** Given a repository R = (P, Dep, Con, Bin), and a set of binary package B_0 \subseteq P, the bootstrap problem is defined as a sequence of build profiles such that:
- R_1 = Pmap(R)
- S_1 = Src(R_1)
- B_1 = Bin(\text{Installable}(S_1, B_0))
- R_2 = Pmap(R_1)
- S_2 = Src(R_2)
- B_2 = Bin(\text{Installable}(S_2, B_1))
\[ \vdots \]
- R_n = Pmap(R_{n-1})
- S_n = Src(R_n)
- B_n = Bin(\text{Installable}(S_{n}, B_{n-1})) = B
and R_i \neq R_j for all i \neq j.
The function Installable(S, B) computes the set of source packages S_i \subseteq S that are compilable in the repository S \cup B, as per Definition 2.7 of an R-installation.
B_0 \cdots B_n are the sets of binary packages that at each step i can be used to resolve the dependencies needed to compile the source packages in S_i. The set B_0 is the minimal build system. The last step makes the packages in B_n available, which is equal to the set of binary packages B in R.
Since at each iteration the set B_i grows, fewer and fewer source packages will need to be modified by changing their build dependencies, finally leading to the original set of source packages S being compiled. The result of this compilation is the original set of binary packages B.
### 3. ALGORITHMS
One important application associated with this work is the development of an automatic build procedure that will be used to compile, test and assemble packages for different architectures. An essential building block toward this goal is the development of heuristics and tools to create an appropriate build order to guide such an infrastructure.
To compute such an order we first need to arrange packages in a graph and then transform it using ad-hoc heuristics to remove any cycles. The final result is then obtained by topologically ordering the vertices of the resulting graph. In Section 3.3 we will also present two algorithms used to select a coherent subset of a repository based on a user specification.
#### 3.1 Build Dependency Graph

In Definition 2.16 we introduced the source graph as a tool to reason about build dependency cycles. However, for efficiency reasons, in our implementation, we use instead an intermediary data structure that we call the build graph. This data structure embeds more information than a source graph and can be easily converted into a source graph. The build graph will be used as input for the edge removal algorithm in Section 3.2.
Directly using a source graph has the disadvantage of creating one edge for each binary package in the installation set (Figure 4(a)) making the edge removal procedure an expensive operation. The build graph obviates this problem by introducing an intermediary vertex between source vertices.
We call these new vertices installation set vertices. We call an edge between a source vertex and an installation set vertex a build-depends edge, while we call an edge between an installation set vertex and a source vertex a builds-from edge. In Figure 4(b), the former are drawn as dashed lines and the latter as solid lines. Rectangles represent source package vertices, ellipses represent installation set vertices.
For example, Figure 4(a) shows the source graph of the repository in Figure 2, where S1 build-depends on the binary packages a and b. Besides edges for those two binary packages, the source graph also contains an edge for the binary package c. This is because c happens to be in the installation set of b and is therefore also in the installation set of S1. It is not obvious from this representation how the source graph should be modified if either of the two build dependencies of S1 (a or b) were dropped. Figure 4(b) shows the corresponding build graph. S1 now no longer has an edge for c; instead, c is part of the installation set of b to which S1 connects. Using this representation, it can immediately be seen how the connections to S3 and S4 would be severed if the dependency b were dropped.
### Algorithm 1 Build Graph Algorithm
1: procedure BUILD_GRAPH(S, B, R)
2: for all $s \in S$ do
3: $I \leftarrow IS(s)$
4: $P \leftarrow PARTITION(I, Dep(s))$
5: for all $i \in P$ do
6: if $i \not\subseteq B$ then
7: ADD_EDGE($s$, $is_i$)
8: for all $b \in i$ do
9: if $b \notin B$ then
10: $t \leftarrow \text{Src}(b)$
11: ADD_EDGE($is_i$, $t$)
Algorithm 1 describes the creation of the build graph. The function BUILD_GRAPH takes as arguments a set of source packages S, a set of binary packages B and a repository R such that $S, B \subseteq R$. The set B holds all the packages that should not be included in the build graph. First, for each source package $s$, we compute an R-installation set $I$ and partition it into subsets. The installation set $I$ is usually small and is computed using a specialized SAT solver [2]. The set $P$ is a set of sets defined as follows
\[ P = \{ \text{DependencyClosure}(p) \cap I \mid p \in \text{Dep}(s) \} \]
and computed by the function PARTITION. We then add edges from $s$ to all installation set partitions $is_i$. If the installation set associated to a build dependency is a subset of $B$, then the subgraph associated to this dependency can be omitted. In the end, we add edges to all source package vertices $t$ that build the binary packages $b$ in the installation set $is_i$. The set $B$ contains all those binary packages that are not relevant to the construction of the build graph because they are part of the initial minimal build system or have been cross or natively compiled earlier.
Consider the repository $R$ of Figure 2. Starting from the source package $S1$, the algorithm proceeds as follows: The direct dependencies of $S1$ are $\text{Dep}(S1) = \{a, b\}$ and the R-installation of $S1$ is $\text{IS}(S1) = \{a, b, c\} = I$. The function $\text{PARTITION}(I, \text{Dep}(S1))$ in this case returns $\{\{a\}, \{b, c\}\}$. Two installation set vertices are created containing those two sets respectively. The vertex for $S1$ is connected to each of them using build-depends edges. As the last step, builds-from edges are added to both installation set vertices. They point to the source package vertices that the binary packages $a$, $b$ and $c$ build from which are $S2$, $S3$ and $S4$ respectively. Those three source packages are then processed in the same fashion and the result of the computation can be seen in Figure 4(b).
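A minimal sketch of Algorithm 1, assuming for illustration that installation sets and the Src mapping are precomputed (the real implementation obtains installation sets from a SAT solver):

```python
def build_graph(sources, dep, inst, src, excluded=frozenset()):
    """Sketch of Algorithm 1. dep[s]: direct binary build dependencies of
    source s; inst[b]: installation set of binary b (including b itself);
    src[b]: source package building b; excluded: the set B of binaries to
    ignore. Installation-set vertices are encoded as ("is", d) tuples."""
    edges = set()
    for s in sources:
        for d in dep[s]:
            part = frozenset(inst[d])      # one partition per direct dependency
            if part <= excluded:
                continue                   # whole subgraph already available: omit
            v = ("is", d)
            edges.add((s, v))              # build-depends edge (dashed)
            for b in part:
                if b not in excluded:
                    edges.add((v, src[b]))  # builds-from edge (solid)
    return edges

# Repository of Figure 2, simplified (assumed from the text; the disjunctive
# dependency of S2 is flattened to its strong dependencies a and d):
dep = {"S1": ["a", "b"], "S2": ["a", "d"], "S3": [], "S4": []}
inst = {"a": {"a"}, "b": {"b", "c"}, "d": {"d"}}
src = {"a": "S2", "b": "S3", "c": "S4", "d": "S1"}
g = build_graph(["S1", "S2", "S3", "S4"], dep, inst, src)
```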
**Remark 3.1.** Algorithm 1 is not complete. Since the installation set computed at each step is not unique, there exist multiple possible build graphs for the same initial source packages. By considering edges that correspond to strong dependencies as in Definition 2.14, we can create a subgraph which only consists of vertices that are present in every possible selection of installation sets.
**Source Graph.**
The source graph, as defined in Definition 2.16, is computed from the build graph by path contraction over all builds-from edges. As a result of the contractions, all installation set vertices are removed from the graph (Figure 4(a)). This operation can be done in $O(n + m)$, with $n$ and $m$ being the number of vertices and edges in the build graph respectively.
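The contraction step can be sketched in one pass over the edges, under the hypothetical encoding where installation-set vertices are tuples and source vertices are strings:

```python
def contract(build_edges):
    """Path-contract installation-set vertices (tuples) out of a build graph,
    keeping only source-to-source edges: for every build-depends edge (s, is)
    and builds-from edge (is, t), emit (s, t). Grouping builds-from edges by
    their installation-set vertex first keeps the pass linear in the edges."""
    builds_from = {}
    for u, v in build_edges:
        if isinstance(u, tuple):            # builds-from edge: is-vertex -> source
            builds_from.setdefault(u, []).append(v)
    source_edges = set()
    for u, v in build_edges:
        if not isinstance(u, tuple):        # build-depends edge: source -> is-vertex
            for t in builds_from.get(v, ()):
                source_edges.add((u, t))
    return source_edges

# Hypothetical fragment: S1 depends on is(b), which builds from S3 and S4.
be = {("S1", ("is", "b")), (("is", "b"), "S3"), (("is", "b"), "S4")}
print(contract(be))  # {('S1', 'S3'), ('S1', 'S4')}
```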
#### 3.2 Finding Build Profiles
Since the build graph (and thereby the source graph) may contain dependency cycles, in this section we present three heuristics to identify a “minimal” set of source packages that, by relaxing their dependencies, will make the build graph acyclic. We start by identifying all cycles of length 2; removing all such cycles is a prerequisite to transforming the build graph into a DAG. Then we introduce a simple heuristic based on local vertex characteristics to automatically identify candidate edges to be removed. The last heuristic provides a way to deal with cycles of arbitrary length. These heuristics are meant to provide package maintainers with tools to highlight important packages and recurrent patterns hidden in the build graph.
**Removing 2-cycles.**
Dependency cycles of length two are most often encountered for source packages of programming languages which need themselves to be compiled (Vala, Python, SML, Free Pascal, Common Lisp, Haskell). In a build graph those are identified by a sequence of one build-depends edge and one builds-from edge in the opposite direction. They contain exactly one source vertex and one installation set vertex. Since there is only one way to break a 2-cycle (only build-depends edges can be removed), we simply enumerate all 2-cycles to create a list of edges that we must deal with to transform the build graph into a DAG. In Figure 4(b), there exist two 2-cycles, between S2 and is(d) and between S2 and is(a). The only way to break them is to remove the build dependencies a and d of S2. When a cycle cannot be broken because the dependency is indeed not optional, the solution is to cross compile the source package, or the set of source packages, which generates the binaries in the installation set. Moreover, we identify all 2-cycles in the core build graph, which is constructed considering only strong dependencies (Definition 2.14). This provides a lower bound on the 2-cycles that must be removed.
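Enumerating 2-cycles amounts to finding edge pairs that are present in both directions; a small sketch with hypothetical vertex names:

```python
def two_cycles(edges):
    """All 2-cycles in a directed graph: unordered pairs {u, v} such that both
    (u, v) and (v, u) are present. In a build graph one endpoint is a source
    vertex and the other an installation-set vertex, so breaking the cycle
    means dropping the build-depends edge of the pair."""
    es = set(edges)
    return sorted({(u, v) for (u, v) in es if (v, u) in es and u < v})

# Hypothetical fragment of Figure 4(b): S2 <-> is(a) and S2 <-> is(d).
edges = [("S2", "is(a)"), ("is(a)", "S2"),
         ("S2", "is(d)"), ("is(d)", "S2"),
         ("S1", "is(a)")]
print(two_cycles(edges))  # [('S2', 'is(a)'), ('S2', 'is(d)')]
```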
**Relaxing dependencies using vertex-based heuristics.**
We provide two metrics to identify source packages or dependencies that may heavily impact the compilation of another source package. Removing these “heavy dependencies” often reduces the number of cycles in the graph. The basic reasoning behind both heuristics is that FOSS software often depends on large software packages to borrow a small number of features. By removing these dependencies, the package can still be successfully compiled, with a reduced set of options, without compromising its core functionality.
Figure 5(a) displays a situation where a heavy dependency requires 55 more source packages to be compiled. In this case, if evolution can be compiled without the binary package libmx-dev, then its connection to 55 other source packages is severed and the build graph is considerably simplified.
Figure 5(b) displays a similar situation, where one build dependency proxies the connection to a heavy source package that requires a large number of dependencies to be satisfied. In this case, the source package src:dia has only one predecessor, the installation set of dia, which in turn has only one predecessor, the source package src:tracker. So if src:tracker can be compiled without dia, then src:dia can be removed from the graph together with all its connections to 22 other installation sets.
Another minor heuristic is based on the experience gained over the years by distribution architects. In particular, it is common knowledge that packages used to generate documentation or to run unit test suites are not essential for the functionality of a package. By identifying and removing those dependencies, it is possible to further simplify the build graph.
**Cycle-based heuristics.**
### Algorithm 2 Approximate feedback arc set algorithm
1: procedure PartialFAS(C, A)
2: if C = ∅ then
3: return A
4: else
5: e ← EdgeWithMostCycles(C)
6: C′ ← C \ CyclesThroughEdge(e)
7: A′ ← A ∪ \{e\}
8: return PartialFAS(C′, A′)
9: procedure RecCycle(G, A, n)
10: if HasCycle(G) then
11: C ← FindCycles(G, n)
12: if C ≠ ∅ then
13: A′ ← PartialFAS(C, ∅)
14: G′ ← RemoveEdges(G, A′)
15: return RecCycle(G′, A ∪ A′, n + 2)
16: else
17: return RecCycle(G, A, n + 2)
18: else
19: return A
20: FindFAS ← RecCycle(G, ∅, n)
The two previous heuristics consider local vertex attributes. We now present an algorithm to analyse large cycles that would otherwise be very difficult to spot by direct inspection.
We use a modified version of Johnson’s algorithm [12] to find all elementary cycles up to a given length. Johnson’s algorithm has a complexity of $O((n + e)(c + 1))$, where $n$ is the number of vertices, $e$ the number of edges and $c$ the number of elementary cycles. Our algorithm has a similar complexity, but the maximum cycle length is bounded.
We present an approximate solution to the feedback arc set problem. A minimal feedback arc set is the smallest (possibly not unique) set of edges which, when removed from the graph, makes the graph acyclic. Since the feedback arc set problem is NP-hard [13] and our graph contains components with up to a thousand vertices, it is computationally infeasible to identify an optimal solution. The proposed algorithm is based on the observation that at least one edge from every cycle in the graph must be part of a feedback arc set, and that the edges common to a large number of cycles are the best candidates for removal. The output is a set of build dependencies which, if dropped, makes the build graph acyclic.
We use a greedy strategy which first enumerates all elementary cycles up to a specific length and then iteratively removes the edges with the most cycles through them. This step is repeated, increasing the maximum length of the cycles to consider, until the graph is cycle free.
Algorithm 2 is composed of two recursive functions and takes as input a strongly connected subgraph $G$ of the build graph and an initial cycle length $n$. It returns a set of build-depends edges which, if dropped, turn the strongly connected component into a directed acyclic graph. The function RecCycle first checks if the input graph is cycle free and, in that case, returns the feedback arc set. Otherwise it first computes the list of cycles of length $n$ to be analysed with the function PartialFAS. The latter greedily removes edges that are part of the most cycles until all are broken. Once all edges found by this method are removed, we can use a standard topological ordering procedure to extract a build order.
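The greedy core of Algorithm 2 (PartialFAS) can be sketched as follows, with each cycle represented by the set of its edges; the edge names are hypothetical:

```python
from collections import Counter

def partial_fas(cycles):
    """Greedily pick the edge lying on the most remaining cycles, drop every
    cycle through it, and repeat until none are left. Returns an approximate
    feedback arc set for the given collection of cycles."""
    remaining = [set(c) for c in cycles]
    fas = []
    while remaining:
        counts = Counter(e for c in remaining for e in c)
        edge, _ = counts.most_common(1)[0]          # EdgeWithMostCycles
        fas.append(edge)
        remaining = [c for c in remaining if edge not in c]
    return fas

# e1 lies on two cycles, so it is removed first; e4 breaks the last cycle.
cycles = [{"e1", "e2"}, {"e1", "e3"}, {"e4"}]
print(partial_fas(cycles))  # ['e1', 'e4']
```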
**Edge Removal Example.**
In Figure 6 we describe the execution of the procedure used to remove cyclic dependencies from a build graph using the algorithms and heuristics presented so far in this section. Figure 6(a) shows a strongly connected component computed from the build graph of the Debian unstable distribution using Algorithm 1. Dashed edges are build-depends edges and their width represents the number of cycles through them computed during the execution of Algorithm 2. Figure 6(b) shows the result of applying a build profile set (Algorithm 2) to the build graph, where the edges with the most cycles through them are removed. First all 2-cycles are removed. Then we enumerate larger cycles (e.g. src:ruby-diff-lcs, src:ruby-rspec, src:ruby-rspec-core, src:ruby-rspec-expectations, src:ruby-diff-lcs) and apply the function EdgeWithMostCycles of Algorithm 2. As a result the build dependency of the source package src:ruby-diff-lcs on the binary package ruby-rspec is severed, as well as other heavy edges. Figure 6(c) shows the associated source graph obtained by path contraction (Section 3.1) from the build graph in Figure 6(b). Numbers represent the build order obtained by sorting the vertices topologically.
**Remark 3.2.** In Algorithm 2, the ratio between execution time and quality of the result can be adjusted by setting the initial maximum length of the cycles to enumerate. Our results show that the gain obtained by considering cycle lengths larger than 10 is small compared to the time needed to compute longer cycles.
#### 3.3 Minimal Build Sets
In this section we present two algorithms that are used to compute a “reduced” distribution that is self-contained (Definition 2.9) and large enough to be representative of the entire distribution, but with less noise. In the context of the heuristic-based algorithms presented in this paper, it is important to focus only on the core part of the compilation problem. A reduced distribution allows us to focus on a meaningful, small and manageable subset of packages. Given an initial set of source packages, the following two algorithms compute, respectively, the maximum number of source packages which can be compiled from an initial set of binary packages and the minimum number of source packages needed to create a self-contained repository. The compilation fix-point procedure is used in the execution pipeline (Section 5) to identify the largest set of source packages that can be compiled from the minimal build system. The build closure algorithm is used to select the smallest number of packages needed to compile a set of binary packages, in order to reduce the number of possible build cycles.
**Compilation Fix-Point.**
We use this algorithm to identify source packages in S which can be compiled from a given set of binary packages M (for example the minimal build system) without relaxing the build dependencies of any source package. The result of the algorithm is a tuple (B, C) of binary and source packages, where the set B can then be used in Algorithm 1 to exclude packages from the build graph.
### Algorithm 3 Compilation Fix Point
1: procedure F(B_i, C_i, S_i)
2: NS ← Installable(S_i, B_i)
3: if NS = ∅ then
4: return (B_i, C_i)
5: else
6: B_{i+1} ← Bin(NS) ∪ B_i
7: C_{i+1} ← C_i ∪ NS
8: S_{i+1} ← S_i \ NS
9: return F(B_{i+1}, C_{i+1}, S_{i+1})
10: Fixpoint ← F(M, ∅, S)
Algorithm 3 proceeds as follows: first we compute the set of all source packages that can be compiled in the repository composed of the binary packages in B_i. If this set is empty, then we return the set of binary packages and the set of source packages. Otherwise we create three new sets, B_{i+1}, C_{i+1} and S_{i+1}: respectively, the set of binary packages built so far, the set of source packages compiled so far, and the set of source packages that are left to compile. We repeat this function until no more source packages can be compiled.
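Algorithm 3 can be sketched iteratively; as a simplifying assumption, a source package counts as installable here when all of its build dependencies are already available (all names below are hypothetical):

```python
def compilation_fixpoint(sources, dep, bins, M):
    """Sketch of Algorithm 3. dep[s]: binary build dependencies of source s;
    bins[s]: binaries built by s; M: initial binary packages (the minimal
    build system). Returns (B, C): binaries available and sources compiled."""
    B, C, S = set(M), set(), set(sources)
    while True:
        ns = {s for s in S if dep[s] <= B}   # Installable(S_i, B_i), simplified
        if not ns:
            return B, C
        for s in ns:
            B |= bins[s]                     # B_{i+1} = Bin(NS) ∪ B_i
        C |= ns                              # C_{i+1} = C_i ∪ NS
        S -= ns                              # S_{i+1} = S_i \ NS

# slib builds from gcc alone; sapp additionally needs lib; sgui needs an
# unavailable binary qt and is therefore never compiled.
dep = {"slib": {"gcc"}, "sapp": {"gcc", "lib"}, "sgui": {"qt"}}
bins = {"slib": {"lib"}, "sapp": {"app"}, "sgui": {"gui"}}
B, C = compilation_fixpoint(["slib", "sapp", "sgui"], dep, bins, {"gcc"})
print(sorted(C))  # ['sapp', 'slib']
```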
**Build Closure.**
The build closure algorithm is used to compute the minimum number of source packages needed to create a self-contained repository as in Definition 2.9.

First we compute the union of all installation sets for all source packages in S: NB = \bigcup_{s \in S} IS(s). If NB is empty, then either S is empty or none of the packages in S are installable. If this is not the case we build three sets: B_{i+1} and C_{i+1}, which respectively hold the set of all binary packages built so far and the set of all source packages compiled so far, and $NS$, the set of source packages that are left to be built, that is, all source packages that are needed to build the binary packages in $NB$ minus the source packages already compiled. The procedure is repeated until all binary packages can be built from the list of source packages and all source packages can be built using the binary packages in the repository.

Figure 6: Strongly connected component 6(a), a directed acyclic build graph 6(b), a directed acyclic source graph and a build order 6(c)
### Algorithm 4 Build Closure
1: procedure $F(B_i, C_i, R, S)$
2: $NB ← \bigcup_{s \in S} IS(s)$
3: if $NB = \emptyset$ then
4: return $(B_i, C_i)$
5: else
6: $B_{i+1} ← B_i \cup NB$
7: $C_{i+1} ← C_i \cup S$
8: $NS ← \text{Src}(NB) \setminus C_{i+1}$
9: return $F(B_{i+1}, C_{i+1}, R, NS)$
10: $\text{Closure} ← F(\emptyset, \emptyset, R, S_1)$
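Algorithm 4 can be sketched the same way; `inst` and `src` below are hypothetical stand-ins for IS(·) and Src(·):

```python
def build_closure(inst, src, S0):
    """Sketch of Algorithm 4. inst[s]: installation set (binaries) needed to
    build source s; src[b]: the source package building binary b. Returns
    (B, C): binaries and sources of a self-contained selection around S0."""
    B, C, S = set(), set(), set(S0)
    while True:
        nb = set().union(*(inst[s] for s in S)) if S else set()
        if not nb:                          # NB = ∅: fix point reached
            return B, C
        B |= nb                             # B_{i+1} = B_i ∪ NB
        C |= S                              # C_{i+1} = C_i ∪ S
        S = {src[b] for b in nb} - C        # NS = Src(NB) \ C_{i+1}

# sapp needs binaries a and b; building their source sa pulls in c in turn.
inst = {"sapp": {"a", "b"}, "sa": {"c"}, "sb": set(), "sc": set()}
src = {"a": "sa", "b": "sb", "c": "sc"}
B, C = build_closure(inst, src, {"sapp"})
print(sorted(B), sorted(C))  # ['a', 'b', 'c'] ['sa', 'sapp', 'sb']
```

Note that, exactly as in the pseudocode, sources whose installation sets are empty at the final step (sc here) are never added to C.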
### 4. EXPERIMENTAL VALIDATION
We validate our results using the Debian Sid distribution as of January 2013. To carry out our experiments, instead of using the entire package repository, we selected a self-contained repository using the build closure algorithm presented in Section 3.3. The algorithm selected a repository of 613 source packages and 2044 binary packages. This is a representative subset of the full distribution as it contains the base system as well as a number of browsers, window managers, display toolkits and several programming languages like Java, Python and Perl. Because of the dependency structure of the Debian Sid distribution, considering only a fraction of the packages for our experiments does not change the overall complexity of the problem, but it significantly reduces the background noise produced by thousands of isolated packages and packages with trivial build dependencies.
Assuming the existence of a minimal build system, we analyse the dependency graph to compile all source packages, including those that were part of the minimal build system and were thus cross compiled earlier.
There exist fewer dependency cycles during cross compilation than during native compilation. The bootstrapping problem would therefore be easier if more source packages were cross compiled. In our evaluation we chose to cross compile as few source packages as possible (only the minimal build system), because binary packages in most distributions only support native compilation and adding cross compilation support is harder than breaking dependency cycles. Since distribution architects are free to make the minimal build system as big as they wish (even including the whole distribution), our tools are also perfectly fit for evaluating the dependency graph in case more, or even all, source packages were chosen to be cross compiled.
The build graph created from the selected repository contains only two strongly connected components, with 977 and 2 vertices respectively. The larger component contains 36 trivial cycles of length two and millions of larger cycles. Using the heuristics presented in the previous sections, we were able to remove all cycles by selecting 58 build dependencies to be removed. The total runtime of our algorithms on a standard desktop machine is less than two minutes.
In order to validate the effectiveness of our heuristics, we manually collected and identified a list of optional build dependencies from different sources. We used manually supplied data from package maintainers and automatically harvested information from the Gentoo Linux distribution. Using this data we were able to verify that 88% of the selected build dependencies can be dropped in practice. Moreover, it is important to notice that there are two orders of magnitude more build dependencies that could potentially be dropped. By selecting only a few dozen build dependencies, our algorithms make it possible to break all dependency cycles with a close to minimal amount of modifications to existing source packages.
Amongst the 88% of removable build dependencies we identify different classes. For example the source package src:curl can be compiled without openssh-server if a unit test is disabled. The source package src:gnutls26 can be built without gtk-doc-tools if one disables documentation generation. Packages like src:libxslt can drop all their dependencies on Python if they do not build their python bindings. The same holds for build dependencies on other languages like Perl, Ruby or Java. Lastly, some source packages can be built with some of their features removed. For example the source package src:nautilus provides a configuration option to disable a component which removes its dependency on libtracker-sparql-0.14-dev.
It is important to notice that many of the 36 cycles of length two cannot trivially be broken. Some of them involve languages like Python or Vala which need themselves to be built. To bootstrap them, the build system has to be modified to first compile a subset of the whole language, which is then used to compile the rest of the language. When, in some circumstances, a cycle of length two cannot be broken because the associated build dependencies are not optional, the solution is to cross compile the missing build dependencies.
There is a high correlation between the build dependencies selected by our heuristics and the build dependencies which can be dropped in practice. Our cycle-based heuristic selects edges with the most cycles through them. These edges are usually between the most highly connected vertices in the source graph. A high degree of connectivity is a property of big software packages with many dependencies, or of software packages depended upon by many others (directly or indirectly). But dependencies on big software packages are mostly optional, because the core dependencies of most source packages are small libraries which are easily compilable. Dependencies on big software packages are used to generate documentation, run test cases, generate language bindings, or provide a functionality specific to that other software package. All of those are usually optional.
It may happen that, after manual inspection, some of the remaining 12% of selected build dependencies are found to be essential. This additional (negative) information can be fed back into our algorithm to refine our heuristics and produce alternate suggestions to the developer.
**Build order.**
A topological sorting of the vertices of the source graph results in a linear ordering. According to this order, a bootstrap can be done by compiling one source package after the other. As many source packages in this linear order do not depend upon each other, they can be compiled in parallel. The partial sorting algorithm we use exploits this fact by selecting packages that are not connected by a $\sim^*$ (Definition 2.12) relationship and marking them for parallel compilation.
The resulting build order consists of 63 groups, where all source packages within each group can be compiled in parallel. The number of source packages which can be compiled at each iteration quickly drops, since packages compiled early have the smallest number of build dependencies. Source packages listed last in the build order have the highest build dependency requirements. A final recompilation step ensures that all source packages are compiled with all features enabled and all their build dependencies present.
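The grouping into parallel build levels can be sketched with a level-by-level variant of Kahn's topological sort; package names are hypothetical and an edge (s, t) means s build-depends on the sources of t:

```python
def build_levels(vertices, edges):
    """Group an acyclic source graph into build levels: every package in a
    level depends only on packages from earlier levels, so all packages of
    one level can be compiled in parallel."""
    deps = {v: set() for v in vertices}
    for s, t in edges:
        deps[s].add(t)
    done, levels = set(), []
    while len(done) < len(vertices):
        level = sorted(v for v in vertices if v not in done and deps[v] <= done)
        if not level:
            raise ValueError("source graph still contains a cycle")
        levels.append(level)
        done |= set(level)
    return levels

# S3 and S4 have no build dependencies, S2 needs S3, S1 needs everything.
edges = [("S1", "S2"), ("S1", "S3"), ("S1", "S4"), ("S2", "S3")]
print(build_levels(["S1", "S2", "S3", "S4"], edges))
# [['S3', 'S4'], ['S2'], ['S1']]
```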
### 5. EXECUTION PIPELINE
The tools to facilitate build graph analysis and build order creation are designed following the UNIX philosophy. Each tool executes one algorithm, the exchange format between the tools is based on plain ASCII text files, and every tool is a filter. Different tasks are carried out by connecting the tools in different ways. Execution pipelines for the cross and native bootstrapping phases and for generating self-contained package selections are supplied as shell scripts.
#### 5.1 Toolset
The bootstrap toolkit is mainly written in the OCaml programming language and uses dose3 as its foundation library [4]. The tool grep-dctrl is used to select the binary packages which should be available in the final system. The tools coinstall and distcheck are part of dose3: the former generates a co-installation set while the latter checks the installability of binary packages. The bin2src and src2bin utilities turn a list of binary packages into the list of source packages they build from, and a list of source packages into the list of binary packages they build, respectively. The build_closure and build_fixpoint tools execute the algorithms of the same names.
#### 5.2 Native Pipeline
Figure 7 shows the pipeline for the native phase. Since the cross phase cannot yet be analyzed due to missing metadata its pipeline is omitted. Solid arrows represent a flow of binary packages. Dotted arrows represent a flow of source packages. Dashed arrows represent textual user input. Rectangular boxes represent filters. There is only one input to the filter, which is the arrow connected to the top of the box. Outgoing arrows from the bottom represent the filtered input. Ingoing arrows to either side are arguments to the filter and control how the filter behaves depending on the algorithm. Oval shapes represent a set of packages. Boxes with rounded corners represent set operations.
### 6. RELATED WORK
The scope of this paper crosses multiple domains of software engineering, from compilers to dependency visualization. To the best of our knowledge, this is the first work that tries to analyse the inter-dependencies arising from the compilation of source packages.
This paper is related to previous work by some of the authors, but different in scope: while in [3] we analysed the evolution of a binary repository and how future upgrades of certain packages can affect the entire system, in this paper we analyse the relationship between binary packages and source packages, and how the dependency graph can be leveraged to provide useful information to developers building a repository of binary packages tailored for a specific hardware architecture.
In the literature we can find ideas similar to what we have used in this work. In [14], the authors propose the concept of “shared dependencies”, that is, edges that are common among multiple simple cycles (cycles in which each vertex appears only once). This is indeed similar to our intuition to select edges in the build graph that are common to multiple cycles. In that work they propose a new layout algorithm to visualize dependency cycles. However, we doubt that this approach can be used in our context, where the scale of the problem is at least one order of magnitude larger.
Research on dependency graphs and on resolving cyclic dependencies between software components has so far mainly been carried out for C++ and Java classes, or has been limited to the packaging model of a programming language. PASTA is an interactive refactoring tool that arranges Java classes into hierarchies; its algorithm involves the removal of cyclic dependencies based on heuristics similar to the ones we present in this paper [10]. Similarly, Jooj is an Eclipse plug-in used to detect and avoid cyclic dependencies during development using a variant of the feedback arc algorithm [17, 16]. In [5], the authors present a heuristic, based on simulated annealing, for automatically optimizing inter-package connectivity and removing dependency cycles. To this end, their algorithm picks software classes to be moved between packages.
In the context of compilers and programming languages, the bootstrapping problem has been well studied. A. Appel provides an axiomatization of the bootstrap problem in the context of compilers [6]. In [7], Chambers et al. discuss the pitfalls related to software recompilation. While these works are related to the idea of building binary components from source components and dealing with dependency cycles, their scale and focus differ from those of FOSS distributions, making a direct reuse of their methodologies difficult.
7. CONCLUSIONS AND FUTURE WORK
In this paper we presented a framework to analyse the bootstrap problem. We provided a set of heuristics to aid distribution architects with the daunting task of porting thousands of software packages to a new architecture in a semi-automatic way. The task of finding droppable build dependencies can never be fully automatic, as it requires actively changing a software’s source code. But once enough source packages are annotated with droppable build dependencies, the task of bootstrapping a distribution will be reduced from a year-long manual effort to just the time it takes to automatically recompile every single source package, in a build order that can be computed automatically in a matter of minutes. Our experience on the Debian distribution, one of the largest FOSS distributions available, provides a good experimental validation workbench for our hypothesis. As dose3 also supports rpm-based distributions, adapting our tools to support those should not pose significant problems.
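Once all cycles have been broken, computing a build order amounts to a topological sort of the source-package dependency graph. A minimal sketch (Python with Kahn's algorithm and made-up package names; the actual toolkit is written in OCaml and works on real Debian metadata):

```python
from collections import deque

def build_order(deps):
    """deps maps each source package to the packages it build-depends on.
    Returns an order in which every package comes after its dependencies."""
    indegree = {p: len(ds) for p, ds in deps.items()}
    dependents = {p: [] for p in deps}
    for p, ds in deps.items():
        for d in ds:
            dependents[d].append(p)
    # Start with packages that have no remaining build dependencies.
    ready = deque(sorted(p for p, n in indegree.items() if n == 0))
    order = []
    while ready:
        p = ready.popleft()
        order.append(p)
        for q in dependents[p]:
            indegree[q] -= 1
            if indegree[q] == 0:
                ready.append(q)
    if len(order) != len(deps):
        raise ValueError("dependency cycle remains; more annotations needed")
    return order
```

The error branch corresponds to the situation where not enough droppable build dependencies have been annotated yet.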
We have yet to test the algorithms in a real-world bootstrapping setup. The collaboration with the Debian community has been extremely fruitful, but work remains to be done to develop an automated bootstrapping tool. In the future we also plan to analyse other FOSS distributions and to extend our approach to other component repositories.
We also want to evaluate other heuristics, for example using the concept of strong bridges and strong articulation points in the build graph [11]. Experimental data suggests that strong bridges exist. Their removal would allow us to reduce the problem size significantly. This is especially important during the cross phase, because the number of cross-compiled packages has to be kept as small as possible. We will further investigate whether vertex properties such as centrality or the shortest distance to a given vertex cluster can serve as helpful heuristics for the developer.
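The notion of a strong bridge can be illustrated with a brute-force check on a toy graph (Python sketch; an edge is a strong bridge if its removal destroys strong connectivity). This quadratic check is only for illustration — [11] gives linear-time algorithms, which would be required at distribution scale:

```python
def reachable(adj, start):
    """Set of vertices reachable from start by iterative DFS."""
    seen, stack = set(), [start]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj.get(v, ()))
    return seen

def strongly_connected(edges, vertices):
    """True if every vertex can reach every other vertex."""
    adj, radj = {}, {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        radj.setdefault(v, []).append(u)
    s = next(iter(vertices))
    # Strongly connected iff s reaches all vertices forwards and backwards.
    return reachable(adj, s) == vertices and reachable(radj, s) == vertices

def strong_bridges(edges, vertices):
    """Edges whose removal destroys strong connectivity."""
    return [e for e in edges
            if not strongly_connected([f for f in edges if f != e], vertices)]
```

On a graph that is a single cycle plus one chord, every cycle edge except the one shortcut by the chord is a strong bridge.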
Availability.
All tools developed for this work are available as free and open source software and can be downloaded from the repository http://gitorious.org/debian-bootstrap/bootstrap. The results can be validated by running the native.sh shell script in the root of the project’s source code repository.
Acknowledgements.
The authors are very grateful to many people for interesting discussions: all the members of the Mancoosi team at University Paris Diderot, Wookey from linaro.org and the Debian community.
8. REFERENCES
Software Analytics in Continuous Delivery: A Case Study on Success Factors
Huijgens, Hennie; Spadini, Davide; Stevens, Dick; Visser, Niels; van Deursen, Arie
DOI: https://doi.org/10.1145/3239235.3240505
Publication date: 2018
Document Version: Peer reviewed version
Published in: 12th International Symposium on Empirical Software Engineering and Measurement (ESEM 2018)
Citation (APA)
Important note
To cite this publication, please use the final published version (if applicable). Please check the document version above.
Report TUD-SERG-2018-02
ABSTRACT
Background: During the period of one year, ING developed an approach for software analytics within an environment of a large number of software engineering teams working in a Continuous Delivery as a Service setting. Goal: Our objective is to examine what factors helped and hindered the implementation of software analytics in such an environment, in order to improve future software analytics activities. Method: We analyzed artifacts delivered by the software analytics project, and performed semi-structured interviews with 15 stakeholders. Results: We identified 16 factors that helped the implementation of software analytics, and 20 factors that hindered the project. Conclusions: Defining and communicating the aims upfront, standardizing data at an early stage, building efficient visualizations, and taking an empirical approach help companies to improve software analytics projects.
CCS CONCEPTS
- Software and its engineering → Empirical software validation;
KEYWORDS
Software Economics, Software Analytics, DevOps, Continuous Delivery, Experience Report, ING
ACM Reference Format:
1 INTRODUCTION
Software analytics is a well-known practice that uses analysis, data, and systematic reasoning to support decision making by managers and software engineers. It aims to empower software development individuals and teams to gain and share insight from their data, to make better decisions [21] [3]. Software engineering lends itself well to benefit from analytics: it is data rich, labor intensive, time dependent, dependent on consistency and control, dependent on distributed decision-making, and has a low average success rate [3]. Although much research has been conducted on software analytics, little work has covered its implementation in practice, and even less in a continuous delivery setting [25] [28].
This paper reports on the experience of deploying software analytics into a continuous delivery process at a bank. We conducted semi-structured interviews with 15 project stakeholders in a variety of roles, on five topics: 1) goals of the analytics project; 2) getting data; 3) analyzing data; 4) visualization; and 5) collaboration with researchers. For each topic, the interviews gathered Likert-scale data about what was perceived as positive or negative, with open-ended questions to discover why. By coding transcriptions of the open-ended responses, we report on factors that help and hinder each of the topics.
1.1 Background and Terminology
ING is a large Netherlands-based bank, operating worldwide, with a strong focus on technology and software engineering. The bank is in the midst of a technology shift from a purely finance-oriented to an engineering-driven company. In recent years, ING has implemented a fully automated release engineering pipeline for its software engineering activities in more than 400 teams, which perform more than 2500 deployments to production each month across more than 750 different applications.
This release engineering pipeline - based on the model described by Humble and Farley [7] - is within ING known as CDaaS, an abbreviation of Continuous Delivery as a Service. Its main goal is to support teams in maximizing the benefits of shared use of tools. The pipeline fully automates the software release process for software applications. It contains several specialized tools which are connected to each other, such as ServiceNow (backlog management), GitLab (collaboration), Jenkins (code), SonarQube (code inspection), OWasp (security), Artifactory (build), and Nolio (deploy).
The mindset behind CDaaS is to move to production as fast as possible, while maintaining or improving quality. CDaaS is at the core of an ongoing transition within ING towards BizDevOps, a model where software developers, business staff, and operations staff work closely together.
Hennie Huijgens
ING Tech Research
Amsterdam, The Netherlands
hennie.huijgens@ing.com
Davide Spadini
Delft University of Technology
Delft, The Netherlands
d.spadini@tudelft.nl
Dick Stevens
ING
Amsterdam, The Netherlands
dick.stevens@ing.com
Niels Visser
ING
Amsterdam, The Netherlands
niels.visser@ing.com
Arie van Deursen
Delft University of Technology
Delft, The Netherlands
arie.vandeursen@tudelft.nl
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
ESEM ’18, October 11–12, 2018, Oulu, Finland
© 2018 Association for Computing Machinery.
ACM ISBN 978-1-4503-5823-1/18/10...$15.00
https://doi.org/10.1145/3239235.3240505
1.2 The GAMe-project
In the midst of implementing these CD pipelines and the new way of working in a large number of its global teams, ING started a software analytics project called GAMe, an abbreviation of Global Agile Metrics. At the time of conducting this study, the GAMe-project had run for approximately one year. A team of software engineers with a data warehouse, software analytics, or business intelligence background implemented an infrastructure based on continuous automated mining of log files of tools within the software engineering pipeline. Log data was analyzed using multiple regression analysis, resulting in so-called strong metrics: metrics that are highly correlated with (and hence have strong predictive power for) future software deliveries.
Three of the five authors of the present study were actively involved in the GAMe-project, the first author as research advisor, the third as project manager, and the fourth as business consultant. In this study we built on earlier work on the GAMe-project [13], in which we focused specifically on the analysis of strong metrics, whereas this paper provides insights in the implementation aspects of such a project itself.
1.3 Problem Statement
Where financial organizations, such as banks, traditionally focused their research activities on financial-, risk- and economic-oriented aspects, nowadays a new focus on technological issues is gaining more and more attention [7][10]. With this technology shift, new horizons towards analytics open up. We assume that research topics such as software analytics are also strongly influenced by contemporary technological developments such as pipeline automation, continuous delivery, and shorter iterations, for example due to a trend towards automation. With this in mind, we address the following research questions:
RQ1: What factors affected the experience of implementing the GAMe-project’s software analytics solution within ING’s continuous delivery squads?
RQ2: What can be done to improve future implementations of software analytics solutions in such environments?
Better understanding of the implementation of a software analytics solution - where specific research capabilities are closely linked with the daily practice of software engineering - will help both researchers and practitioners to optimize their collaboration in future software analytics projects. The GAMe-project also serves as a starting point for a longer term collaboration between ING and the Delft University of Technology. For that reason we use the context of this project to address a third research question:
RQ3: To what extent can collaboration of practitioners and researchers help to improve future software analytics projects?
The remainder of this paper is structured as follows. In Section 2 related work is described. Section 3 outlines the research design. The results of the study are described in Section 4. We discuss the results in Section 5, and finally, in Section 6 we draw conclusions and outline future work.
2 RELATED WORK
Three challenges apply regarding software analytics in practice [28] [21]: 1) How to know that analysis output is insightful and actionable? 2) How to know that you use the right data? 3) How to evaluate analysis techniques in real-world settings?
Buse and Zimmermann [3] point out a model for analytical questions, adapted from [5]. The model distinguishes between information – which can be directly measured – and insight – which arises from analysis and provides a basis for action.
A large-scale survey with 793 professional data scientists at Microsoft [16] reveals 9 distinct clusters of data scientists, and their corresponding characteristics. A small number of other recent studies on how data scientists work in practice has been performed, such as Fisher et al. [9] and Kandel et al. [15]. Although both studies were relatively small and had a more general focus, they revealed challenges similar to those found in the Microsoft study. Related to the earlier mentioned challenges regarding data quality, Agrawal and Menzies [2] found that for software analytics tasks like defect prediction, data pre-processing can be more important than classifier choice, and that ranking studies are incomplete without such pre-processing.
Collaboration of researchers and practitioners in software analytics is a slowly emerging topic in software engineering research. As a result, most studies are relatively small and exploratory in nature (e.g. [11] [8] [19] [23] [27]).
A vast number of studies can be found about continuous delivery - sometimes described as release engineering. Although software analytics is mentioned in many of these, a focus dedicated to the implementation aspects of software analytics in such an environment is lacking in most of them. However, one aspect that is mentioned in every study, is the power of automation for analytics purposes.
Waller et al. [26] examine, for example, how automated performance benchmarks may be included in continuous integration. Adams et al. [1] emphasize the fact that every product release must meet an expected level of quality, and that release processes undergo continual fine-tuning. They argue that release engineering is not taught in class at universities, and that the approaches are quite diverse in nature and scope — an indication that the same may hold for software analytics in such an environment.
Others, such as Matilla et al. [20], inventory analytics for visualizing the maturity of the continuous delivery process itself. Chen [4] describes six strategies to help overcome the adoption challenges of continuous delivery.
Misirli et al. [22] mention three common themes for software analytics projects they examined: 1) increase the model output’s information content with, for example, defect-severity or defect-type prediction, defect location, and phase- or requirement-level effort estimation; 2) provide tool support to collect accurate and complete data; and, 3) integrate prediction models into existing systems, for example, by combining the results of defect prediction with test interfaces to decide which interfaces to test first, or creating a plug-in that seamlessly works in development and testing environments.
To emphasize that not much has been written about software analytics in a continuous delivery context: Laukkanen et al. [18] identified a total of 40 problems, 28 causal relationships and 29 solutions related to the adoption of continuous delivery, yet software analytics is not one of them; the phrase is not even mentioned in the study. Fitzgerald and Stol [10] describe a roadmap and an agenda for continuous software engineering; in this study, too, software analytics is not mentioned once.
Therefore, the novelty of our study is that we perform a case study with quantitative and qualitative data in a large continuous delivery setting, with a specific focus on the implementation aspects of software analytics.
3 RESEARCH DESIGN
Our study methodology involved an inventory of artifacts from the GAMe-project, and a series of semi-structured interviews that included a number of quantitative survey questions. All study materials can be found in our technical report [14].
3.1 Artifacts from the GAMe-project
To get an overview of the results that were delivered within the scope of the GAMe-project, we collected and analyzed artifacts such as overview of collected data, structured metrics, results of data analysis, dashboards, and other visualizations.
3.2 Interviews with Stakeholders
To examine underlying reasons and causes of aspects that helped or hindered the GAMe-project, we opted for semi-structured interviews with its stakeholders [6] [24]. The interviews were performed in an open way, allowing new ideas to be brought up during the interview as a result of what the interviewee says.
Discussion upfront among the group of authors who participated in the project taught us that the stakeholders of the GAMe-project — based on their main task in the project — could roughly be mapped onto the three technology pillars in the Microsoft Research Model for Software Analytics: 1) large-scale computing, 2) analysis algorithms, and 3) information visualization [28]. On top of that we added two topics to the interviews. The first covers the aims of the project as experienced by its stakeholders. The second addresses collaboration between scientists and practitioners from industry in the context of the GAMe-project. Based on these assumptions, we grouped the interview questions into five topics, which we used as a framework of themes to be explored:
1. Purpose and aim of the GAMe-project.
2. Large-scale Computing: Getting the Data.
3. Analysis Algorithms: Analyzing the Data.
4. Information Visualization and action-ability of dashboards.
5. Research collaboration.
To reduce the risk of missing important topics we ended each interview with an open question about remaining issues to address.
Survey questions as part of the interviews. Each of the five interview topics was preceded by two or three survey questions, each consisting of a statement and a 1 to 5 point Likert scale (strongly disagree - disagree - neutral - agree - strongly agree - don’t know) asking to what extent the interviewee agrees with the statement.
Each survey question was followed by the question ‘Can you please explain the choice made to us?’ Subsequently, two questions were asked about the top-3 aspects that helped and the top-3 barriers that hindered with regard to the topic. See the technical report [14] for a detailed overview of the interview questions.
Selection of participants. We identified interviewees from the target population of people that collaborated in the past year in the GAMe-project, either directly in the project itself, or as a business customer of the project within the ING organization. Table 1 gives an overview of interviewed stakeholders, their role, and organizational unit. Each interview lasted 30 to 60 minutes. The interviews were semi-structured and contained the same set of questions for each interviewee. In total, 15 interviews were conducted.
The list of interviewees was narrowed down to 15 by selecting only stakeholders who were personally involved in the GAMe-project and still working within ING. Interviewees worked in different teams; some worked in the same team. Interviews were conducted orally and in person. Both the first and the second author participated in the interviews, alternately fulfilling the role of main interviewer. The first author knew many of the interviewees because he was also involved in the GAMe-project; the second author did not know any of the interviewees. See Table 1 for information on the part of the organization the interviewees came from. An overview of all interview questions — including the survey questions — can be found in the technical report [14].
Analysis of the interview results. We computed the standard deviation for each question, based on a 1-5 Likert scale. Subsequently we calculated indicators in order to interpret the results of the survey (see Figure 4):
1. Percent Agree or Top-2-Box: the percentage of respondents that agreed or strongly agreed.
2. Top-Box: the percentage of respondents that strongly agreed.
3. Net-Top-2-Box: the percentage of respondents that chose the bottom 2 responses subtracted from the percentage that chose the top 2 responses.
<table>
<thead>
<tr><th>ID</th><th>Role</th><th>Organization</th></tr>
</thead>
<tbody>
<tr><td>P01</td><td>Product Owner</td><td>CDaaS Squad</td></tr>
<tr><td>P02</td><td>Pipeline / Software Engineer</td><td>CDaaS Squad</td></tr>
<tr><td>P03</td><td>Team Lead Engineering</td><td>CDaaS Squad</td></tr>
<tr><td>P04</td><td>Data Warehouse Engineer</td><td>Data Warehouse Squad</td></tr>
<tr><td>P05</td><td>Product Owner</td><td>Infrastructure Squad</td></tr>
<tr><td>P06</td><td>GAMe Project Manager</td><td>ING Tech</td></tr>
<tr><td>P07</td><td>Data Analyst / R-programmer</td><td>ING Tech</td></tr>
<tr><td>P08</td><td>Information Manager</td><td>Dashboard Squad</td></tr>
<tr><td>P09</td><td>Business Consultant</td><td>Dashboard Squad</td></tr>
<tr><td>P10</td><td>Dashboard Engineer</td><td>Dashboard Squad</td></tr>
<tr><td>P11</td><td>Business Sponsor</td><td>ING Tech</td></tr>
<tr><td>P12</td><td>Business User</td><td>Development Squad</td></tr>
<tr><td>P13</td><td>Data Warehouse Engineer</td><td>Data Warehouse Squad</td></tr>
<tr><td>P14</td><td>Agile Coach</td><td>ING Tech</td></tr>
<tr><td>P15</td><td>Junior Researcher</td><td>ING Tech</td></tr>
</tbody>
</table>
4. **Coefficient of Variation (CV)**, also known as relative standard deviation: the standard deviation divided by the mean. Higher CV values indicate higher variability.
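As an illustration, the four indicators can be computed from a list of 1-5 Likert codes as follows (Python sketch with hypothetical responses; ‘don’t know’ answers are assumed to have been filtered out beforehand):

```python
from statistics import mean, pstdev

def indicators(responses):
    """Top-2-Box, Top-Box, Net-Top-2-Box and CV for 1-5 Likert responses."""
    n = len(responses)
    top2 = sum(r >= 4 for r in responses) / n      # agree or strongly agree
    top = sum(r == 5 for r in responses) / n       # strongly agree
    bottom2 = sum(r <= 2 for r in responses) / n   # disagree or strongly disagree
    cv = pstdev(responses) / mean(responses)       # relative standard deviation
    return {"top2box": top2, "topbox": top,
            "net_top2box": top2 - bottom2, "cv": cv}
```

For example, the responses `[5, 4, 4, 3, 2]` yield a Top-2-Box of 0.6 and a Top-Box of 0.2.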
Where the first three are measures of central tendency, CV is a measure of variability; we used it in addition to the other approaches. To examine whether the free-format text resulting from the survey confirmed observations from the survey analysis, we coded the free text from the interviews. We used a transcription service to transcribe the audio, then coded the interviews using the R package RQDA [12]. Coding was set up by the first author, who was also involved in the interviews, and subsequently checked by the second author, who also acted as an interviewer.
**Limitations regarding the interview and survey design.** We realize that because three of the authors — the ones working at ING — were involved in the GAMe-project itself, some bias might be introduced. We have tried to prevent this as much as possible by performing the interviews with two interviewers, in this case the first and the second author of this paper. Contrary to the first author, the second author was not involved in the GAMe-project in any way, and therefore did not have any prior knowledge of the project itself. To overcome weaknesses or intrinsic biases that come from single-method, single-observer and single-theory studies, we applied triangulation by combining multiple interviewers and performing the coding and analysis process with multiple authors. We also tried to avoid affecting the design of the interview and the coding of responses by including independent authors in this process.
A remark is in order regarding the fact that not every interviewee was knowledgeable about every topic. As a consequence, some of the interview data may be based on partial or less relevant knowledge. We tried to mitigate this by emphasizing towards interviewees that not all topics needed to be answered in the interview. Answering “don’t know” was a valid option in the survey questions.
4 RESEARCH RESULTS
In this section, we report results on the interview topics and the artifacts delivered in the GAMe-project.
4.1 Inventory of the GAMe-projects’ Artifacts
In the following subsections, we provide an overview of the artifacts delivered in the GAMe-project.
**Data collection and data cleaning.** From the start of the GAMe-project, data from the source systems was recorded as structured metrics in a repository. The GAMe-project used a pragmatic approach to determine which metrics were in scope. In line with the Principles for Software Analytics as stated by Menzies and Zimmermann [21], we opted for a practical approach and to ‘live with the data we have’.
Based on the availability of data, a series of queries was built to transform data emerging in the continuous delivery pipeline into structured metrics. Metrics were defined and collected from two data sources: ServiceNow, the backlog management tool used by the squads, and Nolio, the deployment tool. A complete overview of the built queries and associated metrics is included in our previous paper on the GAMe-project [13].
Three metrics were assessed as lagging: (1) **planned stories completion ratio**: the number of planned stories that were completed in a sprint divided by the number of planned stories; (2) **planned points completion ratio**: the number of completed planned story points divided by the number of planned story points; and (3) **cycle-time**: the mean time from the first test deployment after the last production deployment until the next production deployment, for all applications of a squad. The choice of these three lagging metrics was driven by the assumption that they are typically output related and cannot easily be planned upfront.
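The two completion ratios are simple quotients; a sketch with made-up sprint data (Python; the field names below are illustrative, not the project’s actual schema):

```python
def completion_ratios(sprint):
    """Lagging completion-ratio metrics for one sprint.
    sprint is a dict with planned/completed story and point counts."""
    return {
        "planned_stories_completion_ratio":
            sprint["completed_planned_stories"] / sprint["planned_stories"],
        "planned_points_completion_ratio":
            sprint["completed_planned_points"] / sprint["planned_points"],
    }
```

A sprint that completes 8 of 10 planned stories and 30 of 40 planned points scores 0.8 and 0.75 respectively.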
Besides these, a number of so-called leading metrics was assessed — usually input-oriented and easy-to-influence metrics that give a signal before a trend or reversal occurs; see our previous paper on the GAMe-project [13] for an overview of all metrics in scope. All metrics were structured in a dimensional model — the so-called IAC data warehouse — and related to conformed dimensions, so that metrics are comparable across date/time and organization structure.
**Leading Lagging Matrix to identify strong metrics.** As described in detail in our previous paper [13], descriptive statistics were examined for the subset of GAMe-project metrics. To understand relationships between metrics, and to identify strong metrics — metrics with strong predictive power — both linear regression and pairwise correlation were performed. For visualization purposes, a correlation matrix — the so-called Leading Lagging Matrix — was prepared that plots positive and negative correlations between all individual metrics. Besides the set of lagging metrics, a reference set of five other metrics was plotted on the x-axis of the matrix, although these were not assumed to be lagging. A figure depicting the Leading Lagging Matrix is not included in this paper, but can be found in our previous paper [13] and in the technical report [14]. The analysis led to three implications for squads:
1. Squads can improve **planned stories completion ratio** and reduce **cycle-time** by slicing deliverables into smaller user stories.
2. Squads can reduce **cycle-time** by keeping open space in their sprint planning (e.g. increasing remaining time ratio).
3. Squads can increase **planned stories completion ratio** (delivery predictability) by reducing unexpected unplanned work, for example by improving the quality of systems to reduce incidents that lead to last-minute bug fixes.
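The pairwise correlation step behind such a matrix can be sketched with plain Pearson correlation (Python with toy series; the project itself used R and multiple regression on real pipeline metrics):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length metric series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def correlation_matrix(metrics):
    """metrics: name -> list of per-sprint values; returns nested dict of r."""
    return {a: {b: pearson(va, vb) for b, vb in metrics.items()}
            for a, va in metrics.items()}
```

Strong positive or negative cells of such a matrix are the candidates for “strong metrics” with predictive power.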
**Information Visualization.** Based on these implications a dashboard was developed. The GAMe-dashboard consists of a series of visualizations that focus on a specific squad; see for example the **Squad Onepager** in Figure 1. The Squad Onepager gives squads a summary view on the status of the most important squad metrics. In the example, the **sprint completion ratio** of the last sprints, the **average cycle time** of the last sprints, and the **average cycle time kanban** of the last weeks are depicted.
Another example of a visualization developed within the scope of the GAMe-dashboard is a graph of the **number of squad members in sprint points breakdown**, as depicted in Figure 2. The graph splits the number of completed points into three groups, and also shows the number of squad members at the start of the sprint. The planned points that were not completed are shown in a separate chart; see an example in Figure 3.
**IBM Cognos Analytics as BI tool.** The GAMe-dashboard has been built in the business intelligence (BI) tool IBM Cognos Analytics. Users need to log in with a dedicated user id and password to use the dashboard, even when they are already logged in to the ING work environment. To monitor the use of the GAMe-dashboard, a very limited set of usage metrics was recorded. In aggregate, during the four months following the implementation of the dashboard, 237 unique users logged in to it at least once, and 660 times in total. However, these figures do not provide clear insight into the extent to which these users actually made use of the information on the dashboard.
4.2 Interview Results
In the following subsections we provide an inventory of results from the interviews with GAMe stakeholders, grouped by the five topics from the interview questions framework. We have anonymized parts of quotes to maintain interviewees’ privacy. When quoting survey respondents, we refer to the individual contributor using a [Pxx] notation, where xx is the stakeholder’s ID (see Table 1).
**No consensus on purpose and aims of the GAMe-project.** Each interview started with four questions regarding the purpose and the aims of the GAMe-project. A first observation is that there is no real consensus among the interviewees about the aims and goals of the GAMe-project. The first interview question deals with the purpose and aims of the project, and whether these were completely clear to its stakeholders. The respondents were split: as many interviewees agreed that the aims were clear as disagreed (see Figure 4). A similar outcome - with even larger differences between the interviewees - is found in the answers to the interview question whether the project achieved its aims and goals. The outcomes of the two interview questions on the top-3 aspects that helped and hindered the project in achieving its goals are summarized in Table 2, where the number of interviewees mentioning an aspect is included between brackets.
**Management attention is important.** Management attention is mentioned by many interviewees as an important help for achieving the aims of the GAMe-project: “Management attention, for sure. We now have a weekly meeting with the executives, where we focus [...] on improve the lead time of epics” [P03]. This statement referred not only to management commitment as such, but also to an increasing interest in the outcomes of the project: “I think that what also helped, and this was a little bit later in the project to the end, that there were
Figure 4: Overview of the outcomes of the survey questions within the interviews.
Table 2: Codes related to Aims and Goals
<table>
<thead>
<tr>
<th>Helped to achieve goals</th>
<th>Hindered to achieve goals</th>
</tr>
</thead>
<tbody>
<tr>
<td>Management attention (8)</td>
<td>Lack of time and priorities (5)</td>
</tr>
<tr>
<td>Different stakeholders involved (multi-disciplinary) (5)</td>
<td>Customers do not see added value (5)</td>
</tr>
<tr>
<td>Focus on cycle time reduction (5)</td>
<td>Project was not a joint effort (3)</td>
</tr>
<tr>
<td>Frequent (weekly) team meetings (3)</td>
<td>Focus on performance instead on innovation (3)</td>
</tr>
</tbody>
</table>
**Lack of time and priorities.** This latter statement matches an often-mentioned obstacle that hindered achieving the aims of the GAMe-project: a lack of time and priorities. As P02 states: "What I noticed, at least with the product owners, was that they were busy to perform, and had no room to innovate" [P02]. A related aspect, mentioned by a number of interviewees, is a focus on performance instead of on innovation.
**Many different stakeholders involved.** Another aspect mentioned as helping to achieve the goals of the GAMe-project was the fact that different stakeholders were involved. The multi-disciplinary character of the project was praised by many: "To be confronted with other ideas, which can help with a better result. That is important and should be applied at every squad" [P02]. Furthermore, a number of interviewees mentioned the clear focus of the GAMe-project on cycle time reduction as helpful.
Yet, at the same time not all stakeholders were convinced that the project really did change things in the squads: "I did not see the impact. It did not feel as the set of metrics that we should use. We maybe have done this too much in a waterfall kind of way" [P11]. The project was not seen as a joint effort by some: "It was a group of consultants looking from a distance, saying how it works" [P01]. Some emphasized the fact that squads do not all work in the same way, and even that some stakeholders did not have too much trust in the approach that was chosen for the GAMe-project.
**No consensus on getting and preparing the data.** The second topic in each interview focused on the aspects of getting the data. The first question dealt with whether the data that was used within the project was of good quality. A vast majority of stakeholders agreed with this or was at least neutral. A second question was about how easy it was to get the data for the project. Consensus among the interviewees on this statement was very low, with an emphasis on stakeholders not agreeing.
The third question dealt with how easy it was to prepare the data for further use within the GAMe-project (e.g., combining data from different sources, shaping of the data). Again, consensus among interviewees is low - indicated by a Coefficient of Variation of 41% - with an emphasis on stakeholders not agreeing. The outcomes of the two interview questions on aspects that helped or hindered getting and preparing the data are summarized in Table 3.
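The Coefficient of Variation used here as a consensus indicator is simply the standard deviation divided by the mean of the Likert scores. A minimal sketch, with made-up answers rather than the actual survey responses:

```python
from statistics import mean, stdev

# Hypothetical 5-point Likert answers (1 = strongly disagree ... 5 = strongly agree)
# to the "preparing the data was easy" question; values are illustrative only.
answers = [2, 4, 2, 3, 4, 1, 3, 4, 2, 2]

cv = stdev(answers) / mean(answers)   # coefficient of variation
print(f"CV = {cv:.0%}")               # a higher CV signals lower consensus
```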
Table 3: Codes related to Getting the Data
<table>
<thead>
<tr>
<th>Helped getting and preparing the data</th>
<th>Hindered getting and preparing the data</th>
</tr>
</thead>
<tbody>
<tr>
<td>I4C Data warehouse as a solution (9)</td>
<td>Difficulties with availability of data (11)</td>
</tr>
<tr>
<td>ServiceNow data was of good quality (7)</td>
<td>Lack of standardization (10)</td>
</tr>
</tbody>
</table>
**I4C data warehouse was appreciated.** Many stakeholders mentioned the availability of the I4C data warehouse solution as a big pro for the GAMe-project. However, the data warehouse had two sides: it helped structure the data, but at the same time it created a backlog of queries to be developed that sometimes slowed down the project a bit. Overall, the strengths mentioned for the I4C data warehouse were that it is a future-proof solution in which the data is recorded as structured metrics that are ready for further use, with a frequent automated feed.
**Good data quality, but not for all parts of the project.** Asked about the quality of the data, many interviewees replied that the ServiceNow data was of good quality. Regarding the other data - in particular the data from CDaaS - many interviewees mentioned difficulties in getting the data, sometimes even a kind of silo behavior in which teams were reluctant to cooperate with the GAMe-project: "Getting the data out of the systems... this is really a barrier [...] It is an organizational and technical thing. Sometimes it is technical, because people say 'you cannot have my data, because if you do your queries then the systems will break'. Or otherwise 'no, it is my data, you cannot have it', we have seen that also" [P05].
**Lack of standardization in the CD-tools.** Lack of standardization of the data in the different tools within the continuous delivery pipeline is mentioned by many interviewees as a cause for problems in the squads: "Standardized tooling is a big thing. They want to standardize tooling, but that is a long way. That is why we have a problem at this thing, Everybody is testing in its own way. With their own tooling and that kind of things" [P06]. It did slow down the GAMe-project, and even was mentioned as a cause for not achieving its goals in the end: "For some parts of the goals of the Game Project the data was very good. Mainly the backlog management side, the Incident Management side. But for the software delivery, following the code, it is very bad quality of data" [P09].
**Scale was not an issue.** The first interview question within this topic dealt with whether scale (e.g., the size of the data) caused problems when analyzing the data within the project. Although the number of interviewees that answered this question was quite low, all agreed that scale did not cause problems.
**R was a help, but also a small obstacle.** The second question, about whether analyzing the data with machine learning (e.g., building predictive models) caused problems, was answered by even fewer interviewees. Most of them agreed on the statement by referring to the open source tool R that was used for statistics. However, one disagreed: "The difficulty may be that there were sometimes unexpected results out of the data analysis. And then we miss the statistical knowledge to understand why" [P06]. Apparently R was experienced as a good tool for statistics, but at the same time as a slight bottleneck, due to its somewhat steep learning curve and the fact that not enough statistical knowledge was available in the team.
The results of the two interview questions on aspects that helped and hindered the project with regard to analyzing the data are summarized in Table 4.
Table 4: Codes related to Analyzing the Data
<table>
<thead>
<tr>
<th>Helped analyzing the data</th>
<th>Hindered analyzing the data</th>
</tr>
</thead>
<tbody>
<tr>
<td>Use of R for analyzing (5)</td>
<td>Use of R for analyzing (2)</td>
</tr>
<tr>
<td>Collaboration with academia (1)</td>
<td>Lack of statistical knowledge (2)</td>
</tr>
</tbody>
</table>
For data that was not available, some stakeholders assume that analyzing unstructured data could help: "I think for example that in all sorts of unstructured descriptions of user stories, it must be possible to find a structure in this with machine learning" [P12]. Stakeholders do believe in prediction techniques: "With machine learning we can get very far I think" [P14].
**The dashboard contained the right metrics.** The fourth topic in each interview focused on the more specific aspects of information visualization. The first question within this topic was whether the GAMe-dashboard contains the right metrics for the software delivery team(s) to steer on. Although not all interviewees answered this question, most of the stakeholders agreed with this or were at least neutral. The second question was about the actionability and usefulness of the dashboard for software delivery team(s). Despite a relatively large variance, a majority of stakeholders agreed with this statement.
The outcomes of the two interview questions on aspects that helped or hindered the project with regard to building dashboards are summarized in Table 5.
**Users like the simple set-up of the dashboard.** Interviewees mentioned as a help regarding the visualizations that the GAMe-dashboard was set up quite simply, including only a very limited set of metrics: "It is basic and that is actually what I like. The simple, basic set that is there now, yes, it can help teams a lot" [P08]. They also emphasize that the dashboard is in fact a set of dashboards with different goals: "There is not one GAMe-dashboard. We have a dashboard on squad level, on tribe level and on domain level. But the dashboard on
Table 5: Codes related to Building Dashboards
<table>
<thead>
<tr>
<th>Helped building dashboards</th>
</tr>
</thead>
<tbody>
<tr>
<td>Dashboard contains only a limited number of metrics</td>
</tr>
<tr>
<td>Infrastructure for building dashboards</td>
</tr>
<tr>
<td>Dashboard helps, but users need to be convinced</td>
</tr>
<tr>
<td>Agile Way of Working</td>
</tr>
<tr>
<th>Hindered building dashboards</th>
</tr>
<tr>
<td>Dashboard is not used by the squads</td>
</tr>
<tr>
<td>Dashboard is not user friendly</td>
</tr>
<tr>
<td>Unclear goals of the dashboard</td>
</tr>
<tr>
<td>People’s opinions are in the way (3)</td>
</tr>
<tr>
<td>Accessibility of dashboard is too low (2)</td>
</tr>
</tbody>
</table>
squad level, yes it is widely used. And we are getting feedback on it” [P09].
**The GAMe-dashboard is not used a lot.** Yet, at the same time stakeholders - especially those from the squads themselves - think that the dashboard is not really used by squads: “Nobody is using it” [P03]. As a reason for this they mention the fact that the dashboard is not user friendly: “Developers want to develop. And if you create reports in a specialized application where you have to log-in and you have to find a report that runs once in a period, they are not going to do that” [P01]. Some question whether Cognos BI is the right tool for visualizations: “The tool itself is quite limited in the visualization. So, if you want to make it more advanced, you better make your graphs in Excel, because then they become better. That is a bit of a shame for the tooling” [P08].
Some interviewees propose the idea to include visualizations in the delivery pipeline itself. For example, visualizations incorporated in ServiceNow: “Pop-ups are put-off by developers as quickly as possible, that’s no solution. But looking at your email once in the hour for five minutes, or including visualizations in ServiceNow as part of the daily stand-up would be a good solution” [P12].
**People’s opinions were sometimes blocking.** Some interviewees mentioned that people’s opinions were sometimes blocking progress regarding the use of dashboards: “There are also squads, and even agile coaches, who really do not believe in generic metrics because every squad has to define their own” [P09]. Or they just don’t believe in steering on metrics: “People saying ‘I do not believe in steering on metrics, because we, as a team, need to evolve and cooperate and discuss’, more the softer approach to coaching a team” [P06].
**Empirical approach was appreciated.** The final topic in each interview focused on the aspects of research collaboration. A first question within this topic dealt with the perception whether performing research on the analytics behind the software delivery processes of ING helps solution delivery teams. Not surprisingly, all interviewees agreed or strongly agreed with this. Yet, it is interesting to see why they did so.
Many interviewees argued that due to research the way of working of squads can be better understood. As P08 stated: “The scientific part was really one of the real gains” [P08]. More specifically, by using real data the performance of squads could be explained: “I really believe in looking at data in this way” [P06].
**Collaboration with academia is liked by many.** The second question in this topic, whether collaboration with (technical) universities helped ING to improve its research activities, was also agreed upon by all interviewees, although compared to the former question more stakeholders agreed rather than strongly agreed. One underlying reason mentioned was a fresh look at innovation: “Fresh insights from someone who looks at it with a new, fresh look” [P02]. Another reason - mentioned by many interviewees - was that a scientific approach can help ING: “A lot of help came from Delft University of Technology, on the statistical analysis, R Studio, what packages to use” [P06]. However, a warning was also given about over-complicated analysis approaches: “I think that a challenge at the same time is to make it really down to earth as well” [P08].
The results of the two interview questions about aspects helping or hindering ING to improve research on its software delivery processes are summarized in Table 6.
Table 6: Codes related to Research Collaboration
<table>
<thead>
<tr>
<th>Helped to improve research</th>
</tr>
</thead>
<tbody>
<tr>
<td>Understand the way of working of squads (8)</td>
</tr>
<tr>
<td>A scientific (evidence-based) approach (7)</td>
</tr>
<tr>
<td>Expectation that universities are ahead (6)</td>
</tr>
<tr>
<td>Real data to explain performance (6)</td>
</tr>
<tr>
<th>Hindered to improve research</th>
</tr>
<tr>
<td>Research did not solve the problem (4)</td>
</tr>
<tr>
<td>Outcomes were not discussed with the squads (3)</td>
</tr>
<tr>
<td>Focus on risk and security of a bank (2)</td>
</tr>
<tr>
<td>Adoption of scientific approach might be difficult (2)</td>
</tr>
<tr>
<td>Too early drawing conclusions (1)</td>
</tr>
</tbody>
</table>
**Sharing outcomes of research is important.** Apart from the aforementioned positive aspects, a number of issues were mentioned that hindered the research as done within the GAMe-project. Some interviewees - all from software delivery squads - mentioned that the outcomes of the GAMe-project were not shared with the squads at the end of the project: “I am wondering if they ever asked feedback on the reports within teams” [P01]. Others mentioned - regardless of their agreement on the first two questions in this topic - that the research in the end did not help: “It is nice that an article was written about it. But for the short term, it did not solve a problem” [P03].
**Compliance, security and risk are challenges.** Furthermore, the focus of a bank on risks and security might have caused challenges for the project: “A CDaaS pipeline full of security findings and risk related things, a CDaaS pipeline that is not built for reporting” [P01]. A stakeholder mentioned a risk related to research versus the delivery squads: “You now see a difference occurring between research and development within ING. For example, I see little from the research department going to production. Actually, there is a gap there, and that is not desirable either” [P12].
5 DISCUSSION
In this section we discuss the outcomes of our study, and we examine implications for industry, and threats to validity.
5.1 How to improve software analytics projects
**Cherish management attention.** The inventory of aspects that helped and hindered the GAMe-project, as addressed in the second research question - RQ2: What can be done to improve future implementations of software analytics solutions in such environments? - indicates that thorough preparation of a software analytics project is an important precondition for similar projects in the future. We argue that, although management attention was praised by many during the interviews - one thing to cherish in future projects - a lack of steering and unanimity on the topic of software analytics might cause barriers during the course of a project.
Software analytics is typically research oriented, and thus goals should perhaps be defined differently, in terms of hypotheses. Zhang et al. [28] offer advice for software analytics activities - "create feedback loops early, with many iterations" - that might be very suitable for ING's purposes too. Operations-driven goals such as 'cycle-time shortening' or 'quality improvement' might fit better in such an approach, leaving enough space for discovering new horizons with great impact.
**Consider the approach to data collection and storage.** We assume that a question raised by some of the interviewees - whether it would be better to collect and store unstructured data and apply machine learning to it to look for structure and information - offers challenging yet very interesting horizons for future research.
**Aim for standardization of CD-tools and data solutions.** The availability of a standardized solution within the company to set up the I4C data warehouse, in combination with the good quality of data derived from the backlog management tool, was experienced by many stakeholders as a great help. The choice that was made early in the project to build queries to create metrics in a dedicated repository can easily be explained. On the other hand, data from other sources - especially CD-tools - was difficult or impossible to obtain, and overall of low quality. Linking data across the boundaries of different tools was therefore difficult and sometimes even impossible. Decisions on standardization of both CD-tools and data collection and storage should be made upfront to prevent additional work afterwards.
**Use R for analysis purposes.** The practice of the GAMe-project showed that, as the project progressed, knowledge of R within the team increased enormously. We therefore expect that in subsequent projects the backlog in this area will soon be eliminated, and that the benefits of R will outweigh the disadvantages.
**Optimize for actionable information.** One of the interviewees said it clearly: "Visualizations need to be in your face" [P08]. Apparently this was not yet the case for the GAMe-dashboard. Especially the fact that users need to log in to a specific business intelligence tool seems one step too far for squad members. We assume that future research regarding dashboards should focus on how to include visualizations in the daily practice of squad members, executives, and other stakeholders, and on optimizing insightful and actionable information. Potentially promising ideas came up in the interviews, such as including visualizations in the backlog management tool that is used by the squads in their daily stand-ups. Furthermore, future research should focus on how to measure the real impact of visualizations, instead of only looking at the number of log-ins in the business intelligence tool.
**Comparison of studies.** We built our study on earlier work [13] in which we focused specifically on the analysis of a subset of metrics. The goal of that study was to identify strong metrics. For this purpose a project - the so-called GAMe-project - was performed. In our initial study [13] we analyzed a subset of 16 metrics from 59 squads at ING. We identified two lagging metrics and assessed four leading metrics to be strong. The results of the initial study were used by ING to build a series of dashboards for squads to steer on.
In the follow-up study that we describe in this paper we evaluated the process of implementing this GAMe-project, mainly looking from the perspective of 'how did stakeholders of the project experience the implementation process?' We did not evaluate the artifacts delivered by the GAMe-project as such, but instead we asked stakeholders for their experiences and strategies for improvement of future software analytics projects.
5.2 Implications
The outcomes of our study might not simply be generalized to other environments, within or outside ING. Yet, we identify some take-away-messages that apply to implementing a software analytics solution in a CD-setting:
1) Companies should think ahead about the aims they want to achieve with software analytics, and then continuously communicate about this to all involved. In view of the investigative nature of software analytics projects - and with them the often vague objectives - we argue that it is preferable to appoint research as an objective in itself.
2) Companies that set up CD-pipelines, should give attention at an early stage to standardization of data, especially across the boundaries of different systems and tools in the pipeline. This aspect should be high on the agenda of enterprise data architects involved in such activities.
3) Visualizations should - when applicable - be incorporated in the daily work environment of delivery teams.
A fourth implication that we identified was that companies should use an empirical approach when starting a software analytics project. Collaboration with academia helps. However, continuous attention must be paid to presentation of results and explaining the scientific approach. As a note to this implication we realize that respondents might be somewhat biased by the academic partners being involved and maybe are not entirely neutral about their own role.
5.3 Threats to Validity
We see the following key limitations.
First, our study is conducted in a single company. Nevertheless, we argue that the results can be used to draw many lessons, since the company in question, ING, is at the forefront of applying analytics and continuous delivery at scale in an industrial context that has a tradition of being conservative and risk averse from a technological point of view. This makes our results applicable to many other organizations in a similar situation.
Second, our study is based on subjective analysis, and some of the authors were involved in the project under study. We mitigated this by being open about our background, and by involving co-authors who were not involved in the project.
Third, the analytics context and the specific metrics used are specific to the company in question, and their prominence may not be widespread yet. Nevertheless, we think the factors identified in Tables 2-6 are largely independent of the actual setting, and apply to many software analytics contexts.
6 CONCLUSIONS
We studied the outcomes of a software analytics project that was performed during one year within the continuous delivery teams of ING. Within the scope of the project a dataset built from backlog management and continuous delivery data from ING was analyzed, in order to identify strong metrics: metrics with high predictive power towards a subset of lagging variables. Based on this analysis three implications for improvement strategies for squads were identified, and a dashboard was built based on these to help squads improve their performance. To understand the causes behind the implementation of the project, we interviewed 15 stakeholders about five project-related topics. Based on the interviews we identified 16 factors that helped the implementation of software analytics, and 20 factors that hindered the project.
ACKNOWLEDGMENTS
The authors would like to thank ING and all interviewees for giving the confidence to share their experiences with us. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 642954.
REFERENCES
TECHNICAL REPORT
Interview and Survey Set-up
(1) Purpose and aims of the GAMe-project
(a) To what extent do you agree with the following statement?
"The purpose and aims of the GAMe-project were completely clear to its stakeholders" strongly disagree - disagree - neutral - agree - strongly agree - don’t know
(i) Follow-up question: Can you please explain the choice made to us?
(b) To what extent do you agree with the following statement?
"The GAMe-project did achieve its aims and goals" strongly disagree - disagree - neutral - agree - strongly agree - don’t know
(i) Follow-up question: Can you please explain the choice made to us?
(c) What top-3 aspects did help the GAMe-project with regard to achieving its goals?
(d) What top-3 barriers did hinder the GAMe-project with regard to achieving its goals?
(2) Large-scale Computing: Getting the Data
(a) To what extent do you agree with the following statement?
"The data that we used within the GAMe-project was of good quality" strongly disagree - disagree - neutral - agree - strongly agree - don’t know
(i) Follow-up question: Can you please explain the choice made to us?
(b) To what extent do you agree with the following statement?
"Getting the data that we needed for the GAMe-project was easy" strongly disagree - disagree - neutral - agree - strongly agree - don’t know
(i) Follow-up question: Can you please explain the choice made to us?
(c) To what extent do you agree with the following statement?
"Preparing the data for further use within the GAMe-project (e.g. combining data from different sources, shaping of the data) was easy" strongly disagree - disagree - neutral - agree - strongly agree - don’t know
(i) Follow-up question: Can you please explain the choice made to us?
(d) What top-3 aspects did help the GAMe-project with regard to getting and preparing the data?
(e) What top-3 barriers did hinder the GAMe-project with regard to getting and preparing the data?
(3) Analysis Algorithms: Analyzing the Data
(a) To what extent do you agree with the following statement?
"When analyzing the data within the GAMe-project, scale (e.g. size of the data) caused problems" strongly disagree - disagree - neutral - agree - strongly agree - don’t know
(i) Follow-up question: Can you please explain the choice made to us?
(b) To what extent do you agree with the following statement?
"When analyzing the data within the GAMe-project, machine learning (e.g. building predictive models) caused problems" strongly disagree - disagree - neutral - agree - strongly agree - don’t know
(i) Follow-up question: Can you please explain the choice made to us?
(c) What top-3 aspects do you expect to help ING to improve research on its software delivery processes?
(d) What top-3 barriers do you expect to hinder ING to improve research on its software delivery processes?
Aggregated Survey Results
Table 7: Aggregated Survey Results
<table>
<thead>
<tr>
<th>Count</th>
<th>Q1.1</th>
<th>Q1.2</th>
<th>Q2.1</th>
<th>Q2.2</th>
<th>Q2.3</th>
<th>Q3.1</th>
<th>Q3.2</th>
<th>Q4.1</th>
<th>Q4.2</th>
<th>Q5.1</th>
<th>Q5.2</th>
</tr>
</thead>
<tbody>
<tr>
<td>Sum</td>
<td>12</td>
<td>12</td>
<td>12</td>
<td>13</td>
<td>10</td>
<td>7</td>
<td>4</td>
<td>7</td>
<td>11</td>
<td>14</td>
<td>13</td>
</tr>
<tr>
<td>Mean</td>
<td>3.08</td>
<td>2.92</td>
<td>3.50</td>
<td>2.54</td>
<td>2.70</td>
<td>4.43</td>
<td>3.25</td>
<td>4.00</td>
<td>3.55</td>
<td>4.57</td>
<td>4.38</td>
</tr>
<tr>
<td>Median</td>
<td>3.50</td>
<td>2.50</td>
<td>3.50</td>
<td>3.00</td>
<td>3.00</td>
<td>4.00</td>
<td>4.00</td>
<td>4.00</td>
<td>4.00</td>
<td>5.00</td>
<td>4.00</td>
</tr>
<tr>
<td>Standard Deviation</td>
<td>0.95</td>
<td>1.19</td>
<td>1.04</td>
<td>1.15</td>
<td>1.10</td>
<td>0.49</td>
<td>1.30</td>
<td>0.53</td>
<td>1.08</td>
<td>0.49</td>
<td>0.49</td>
</tr>
<tr>
<td>Percent Agree</td>
<td>50%</td>
<td>42%</td>
<td>50%</td>
<td>15%</td>
<td>30%</td>
<td>100%</td>
<td>75%</td>
<td>86%</td>
<td>73%</td>
<td>100%</td>
<td>100%</td>
</tr>
<tr>
<td>Top-2-Box</td>
<td>50%</td>
<td>42%</td>
<td>50%</td>
<td>15%</td>
<td>30%</td>
<td>100%</td>
<td>75%</td>
<td>86%</td>
<td>73%</td>
<td>100%</td>
<td>100%</td>
</tr>
<tr>
<td>Top-Box</td>
<td>0%</td>
<td>8%</td>
<td>17%</td>
<td>8%</td>
<td>0%</td>
<td>43%</td>
<td>0%</td>
<td>14%</td>
<td>9%</td>
<td>57%</td>
<td>38%</td>
</tr>
<tr>
<td>Net Top Box</td>
<td>0%</td>
<td>0%</td>
<td>8%</td>
<td>-15%</td>
<td>-20%</td>
<td>43%</td>
<td>-25%</td>
<td>14%</td>
<td>0%</td>
<td>57%</td>
<td>38%</td>
</tr>
<tr>
<td>Net Top-2-Box</td>
<td>8%</td>
<td>-8%</td>
<td>42%</td>
<td>-31%</td>
<td>-10%</td>
<td>100%</td>
<td>50%</td>
<td>86%</td>
<td>55%</td>
<td>100%</td>
<td>100%</td>
</tr>
<tr>
<td>Coefficient of Variation (CV)</td>
<td>31%</td>
<td>41%</td>
<td>30%</td>
<td>45%</td>
<td>41%</td>
<td>11%</td>
<td>40%</td>
<td>13%</td>
<td>30%</td>
<td>11%</td>
<td>11%</td>
</tr>
</tbody>
</table>
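The derived rows of Table 7 (Percent Agree, Top-Box, Net Top Box, Net Top-2-Box) can be sketched as a small helper on 5-point Likert scores. This is our reading of the table's definitions, and the scores below are illustrative, not the actual responses to any of the questions:

```python
def survey_stats(scores):
    """Aggregate 5-point Likert scores (1 = strongly disagree ... 5 = strongly agree)."""
    n = len(scores)
    top_box  = sum(s == 5 for s in scores) / n   # strongly agree only
    top2_box = sum(s >= 4 for s in scores) / n   # agree or strongly agree
    bottom   = sum(s == 1 for s in scores) / n   # strongly disagree only
    bottom2  = sum(s <= 2 for s in scores) / n   # disagree or strongly disagree
    return {
        "percent_agree": top2_box,          # same as Top-2-Box in Table 7
        "top_box": top_box,
        "net_top_box": top_box - bottom,
        "net_top_2_box": top2_box - bottom2,
    }

# Illustrative answers from 12 hypothetical respondents:
stats = survey_stats([3, 4, 2, 4, 3, 5, 2, 4, 3, 1, 4, 3])
print(stats)
```

This reading also explains why the Percent Agree and Top-2-Box rows in Table 7 coincide.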
Dan R. Ghica and Alex Smith
University of Birmingham, UK
Abstract. Bounded linear types have proved to be useful for automated resource analysis and control in functional programming languages. In this paper we introduce a bounded linear typing discipline on a general notion of resource which can be modeled in a semiring. For this type system we provide both a general type-inference procedure, parameterized by the decision procedure of the semiring equational theory, and a (coherent) categorical semantics. This could be a useful type-theoretic and denotational framework for resource-sensitive compilation, and it represents a generalization of several existing type systems. As a non-trivial instance, motivated by hardware compilation, we present a complex new application to calculating and controlling timing of execution in a (recursion-free) higher-order functional programming language with local store.
1 Resource-aware types and semantics
The two important things about a computer program are what it computes and what resources it needs to carry out the computation successfully. Correctness of the input-output behavior of programs has been, of course, the object of much research from various conceptual angles: logical, semantical, type-theoretical and so on. Resource analysis has been conventionally studied for algorithms, such as time and space complexity, and for programs has long been a part of research in compiler optimization.
An exciting development was the introduction of semantic [1] and especially type-theoretic [14] characterizations of resource consumption in functional programming languages. Unlike algorithmic analyses, type-based analyses are formal and can be statically checked for implementations of algorithms in concrete programming languages. Unlike static analysis, a typing mechanism is compositional, which means that it supports, at least in principle, separate compilation and even a foreign function interface: it is an analysis based on signatures rather than implementations.
Linear logic and typing, because of the fine-grained treatment of resource-sensitive structural rules, constitute an excellent framework for resource analysis, especially in its bounded fragment [13], which can logically characterize polynomial time computation. Bounded Linear Logic (BLL) was subsequently extended to improve its flexibility while retaining poly-time [5] and further extensions to linear dependent typing were used to completely characterize complexity of evaluation of functional programs [4].
Such analyses use time as a motivating example, but can be readily adapted to other consumable resources such as energy or network traffic. What they have in common is a monadic view of resources, tracking their global usage throughout the execution of the term.
A complementary view on resource sensitivity is the co-monadic one, as advocated by Melliès and Tabareau [18]. The intuition is that the type system tracks how much resource a term needs in order to execute successfully. This is quite typical when controlling reusable resources which can be allocated and de-allocated at runtime, the typical example of which is memory, especially local (stack-allocated) memory. In fact this resource-sensitive approach is key in giving a better semantic understanding of higher-order state [17]. This view of resources is instrumental in facilitating the compilation of functional-imperative programming languages directly for resource-constrained runtimes, such as electronic circuits [8].
2 Bounded linear types over a semiring
Types are generated by the grammar \( \theta ::= \sigma \mid (J \cdot \theta) \to \theta \), where \( \sigma \) is a fixed collection of base types and \( J \in \mathcal{J} \), where \( (\mathcal{J}, +, \times, 0, 1) \) is a semiring. We will always take \( \cdot \) to bind strongest so we will omit the brackets.
Let \( \Gamma = x_1 : J_1 \cdot \theta_1, \ldots, x_n : J_n \cdot \theta_n \) be a list of identifiers \( x_i \) and types \( \theta_i \), annotated with semiring elements \( J_i \). Let \( \text{fv}(M) \) be the set of free variables of term \( M \), defined in the usual way. The typing rules are:
- **Identity**
\[
\frac{}{x : 1 \cdot \theta \vdash x : \theta}
\]
- **Weakening**
\[
\frac{\Gamma \vdash M : \theta}{\Gamma, x : J \cdot \theta' \vdash M : \theta}
\]
- **Abstraction**
\[
\frac{\Gamma, x : J \cdot \theta \vdash M : \theta'}{\Gamma \vdash \lambda x. M : J \cdot \theta \to \theta'}
\]
- **Application**
\[
\frac{\Gamma \vdash M : J \cdot \theta \to \theta' \quad \Gamma' \vdash N : \theta}{\Gamma, J \cdot \Gamma' \vdash M N : \theta'}
\]
- **Contraction**
\[
\frac{\Gamma, x : J \cdot \theta, y : K \cdot \theta \vdash M : \theta'}{\Gamma, x : (J + K) \cdot \theta \vdash M[x/y] : \theta'}
\]
In **Weakening** we have the side condition \( x \notin \text{fv}(M) \), and in **Application** we require \( \text{dom}(\Gamma) \cap \text{dom}(\Gamma') = \emptyset \). In the **Application** rule we use the notation
\[
J \cdot (x_1 : K_1 \cdot \theta_1, \ldots, x_n : K_n \cdot \theta_n) \triangleq x_1 : (J \times K_1) \cdot \theta_1, \ldots, x_n : (J \times K_n) \cdot \theta_n
\]
Note. For the sake of simplicity we take operations in the semiring to be resolved syntactically within the type system. So types such as \( 2 \cdot A \) and \( (1 + 1) \cdot A \) are taken to be syntactically equal. In the context of type-checking this is reasonable because semiring actions are always constants that the type-checker can calculate with. If we were to allow resource variables, i.e. some form of resource-based polymorphism (cf. [5]), then a new structural rule would be required to handle type congruences induced by the semiring theory:
\[
\frac{\Gamma, x : J \cdot \theta' \vdash M : \theta \qquad J = J'}{\Gamma, x : J' \cdot \theta' \vdash M : \theta} \ \text{Semiring}
\]
But in our current system this level of formalization is not worth the complication.
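Concretely, the semiring interface that the rules above assume can be sketched as follows. This is a minimal Python sketch with hypothetical names (the paper fixes no implementation), instantiated with the natural numbers; resolving semiring operations syntactically amounts to evaluating expressions like \( 1 + 1 \) to constants before comparing types.

```python
# Minimal sketch of the resource-semiring interface assumed by the type
# system, instantiated with the naturals (hypothetical names; the paper
# fixes no concrete API).
from dataclasses import dataclass

@dataclass(frozen=True)
class Nat:
    """The semiring (N, +, x, 0, 1), as used by SCC-style thread bounds."""
    n: int

    def __add__(self, other):   # additive monoid, unit 0
        return Nat(self.n + other.n)

    def __mul__(self, other):   # multiplicative monoid, unit 1
        return Nat(self.n * other.n)

ZERO, ONE = Nat(0), Nat(1)

# Semiring actions are constants, so the type-checker resolves operations
# syntactically: 2·A and (1 + 1)·A denote the same type.
assert ONE + ONE == Nat(2)
# Distributivity: J1 x (J + J') = J1 x J + J1 x J'
assert Nat(3) * (ONE + ONE) == Nat(3) * ONE + Nat(3) * ONE
```
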
2.1 Examples
**Bounded Linear Logic.** If we take \( \mathcal{J} \) to be resource polynomials we obtain BLL. A *monomial* is any finite product of binomial coefficients \( \prod_{i=1}^{k} \binom{x_i}{n_i} \); a resource polynomial is a finite sum of monomials. They are closed under sum and product and have a semiring structure. The *Axiom* of BLL is not quite the same as ours, as we require a unit action on the type of the variable, whereas in BLL any bound can be introduced, hence the whimsical name of *Waste of Resources* for the BLL Axiom. In our system a wasteful axiom is admissible only if a resource can be decomposed as a sum involving the unit resource, by using a combination of contraction and weakening.
\[
\dfrac{\dfrac{\dfrac{}{x : 1 \cdot \theta \vdash x : \theta}}{y : J \cdot \theta,\; x : 1 \cdot \theta \vdash x : \theta}}{x : (J + 1) \cdot \theta \vdash x : \theta}
\]
The intuition of this restriction is that we need *at least* a unit of resource in order to use \( x \).
**Syntactic Control of Concurrency (SCC).** It is possible to use a comonadic notion of resource to bound the number of threads used by a parallel programming language [10]. This has the advantage of identifying programs with finite-state models, with applications in automated verification [9] and in hardware synthesis [11]. If we instantiate \( J \) to the semiring of natural numbers we obtain SCC. However, SCC includes an additive conjunction rule to model *sequentiality*:
\[
\frac{\Gamma \vdash M : \theta \quad \Gamma \vdash N : \theta'}{\Gamma \vdash \langle M, N \rangle : \theta \times \theta'}
\]
This allows us to distinguish between sequential and concurrent programming language constants, e.g.: \( \text{seq} : !_1 \text{com} \times !_1 \text{com} \rightarrow \text{com} \) versus \( \text{par} : !_1 \text{com} \rightarrow !_1 \text{com} \rightarrow \text{com} \). This is an idea borrowed from Reynolds’s *Syntactic Control of Interference* (SCI) [23].
This distinction between sequential and parallel composition becomes interesting when contraction is involved, e.g.
\[
\lambda x.\, \text{seq}(x, x) : !_1 \text{com} \rightarrow \text{com} \quad \text{vs.} \quad \lambda x.\, \text{par}\, x\, x : !_2 \text{com} \rightarrow \text{com}. \tag{2}
\]
Note that SCC uses the notation \(!_k \rightarrow \) instead of \( k \cdot \rightarrow \) to indicate resource actions.
Tagged Control of Concurrency (TCC). SCI is akin to SCC where all bounds are set to 1. This means that in SCI the first term in Eqn. 2 can be typed, but the second cannot. Both SCI and SCC are complicated semantically by the presence of the extra additive conjunction because it lacks an adjoint exponential. The complication is also syntactic as the two composition operators have peculiarly different signatures (uncurried vs. curried).
Completing the syntactic and semantic tableau by providing both conjunctions with exponentials leads to Bunched Typing [21]. However, it is possible to have an SCI-like type system without using both additive and multiplicative conjunctions, but harnessing the power of an expressive enough set of resources. The elements of the semiring are a system of tags corresponding, intuitively, to run-time locks that need to be acquired. A notion of safety is introduced for tags, corresponding to the requirement that locks cannot be grabbed more than once. The restrictions on terms of an SCI-like type system can be recovered by imposing the restriction that all tags are safe. The two command compositions, sequential and parallel, have types:
\[
\text{seq}_{\tau_1, \tau_2} : \tau_1 \cdot \text{com} \rightarrow \tau_2 \cdot \text{com} \rightarrow \text{com} \quad \text{vs.} \quad \text{par}_\tau : \tau \cdot \text{com} \rightarrow \tau \cdot \text{com} \rightarrow \text{com},
\]
for any (safe) tags \(\tau, \tau_1, \tau_2\) such that \(\tau_1 + \tau_2\) is also safe. Note that the two command compositions (sequential and parallel) now have the same type skeleton \((\text{com} \rightarrow \text{com} \rightarrow \text{com})\) and no extra rules are required. The example terms in Eqn. 2 can be written in a more uniform way as:
\[
\lambda x.\, x ; x : (\tau_1 + \tau_2) \cdot \text{com} \rightarrow \text{com} \quad \text{vs.} \quad \lambda x.\, x \parallel x : (\tau + \tau) \cdot \text{com} \rightarrow \text{com}.
\]
As in SCI, the second one is not a valid term, as the tag \((\tau + \tau)\) cannot be safe.
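To make the safety condition tangible, here is a small sketch in which tags are modelled as multisets of run-time locks; this representation and the helper names are our illustrative assumptions, not the system of [24]. Tag sum is multiset union, and a tag is safe when no lock occurs more than once, which is why \( \tau + \tau \) above is rejected.

```python
# Illustrative model of TCC tags as multisets of run-time locks
# (hypothetical representation; the actual system is in [24]).
from collections import Counter

def tag_sum(t1, t2):
    """Sum of tags: multiset union of the locks they mention."""
    return t1 + t2

def safe(tag):
    """A tag is safe if no lock needs to be acquired more than once."""
    return all(count <= 1 for count in tag.values())

t1, t2 = Counter(["a"]), Counter(["b"])
t = Counter(["c"])

assert safe(tag_sum(t1, t2))    # tau1 + tau2 with distinct locks: safe
assert not safe(tag_sum(t, t))  # tau + tau duplicates a lock: rejected
```
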
The uniformity of the type skeleton is quite important for practical usage. Under the original SCI, functions that need their arguments to share information must use an uncurried signature, as opposed to functions that disallow sharing; this syntactic distinction places a sometimes difficult burden on the programmer. By contrast, in TCC the tags are inferred automatically by the compiler.
A full description of the type system, its game semantics and an application to hardware compilation is forthcoming [24].
2.2 Modularity
Given two semirings \(J, J'\) their Cartesian product \(J \times J'\) is also a semiring with multiplicative unit \((1, 1')\), additive unit \((0, 0')\) and addition and multiplication defined component-wise. Because there are many different resources one might want to track in the type system (time, space, energy, bandwidth, etc.) with significantly different properties, the fact that they can be easily combined in a modular way can be a quite appealing feature.
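A sketch of the product construction (with a hypothetical representation of combined actions as pairs), e.g. tracking a thread bound and a memory bound with a single component-wise action:

```python
# Product of two semirings, defined component-wise (hypothetical
# pair representation for illustration).
def pair_add(x, y):
    """Addition in the product semiring J x J'."""
    return (x[0] + y[0], x[1] + y[1])

def pair_mul(x, y):
    """Multiplication in the product semiring."""
    return (x[0] * y[0], x[1] * y[1])

ZERO = (0, 0)   # additive unit (0, 0')
ONE = (1, 1)    # multiplicative unit (1, 1')

# e.g. one action tracking (threads, memory cells) simultaneously
a, b = (2, 16), (1, 8)
assert pair_add(a, b) == (3, 24)
assert pair_mul(a, ONE) == a
assert pair_mul(a, ZERO) == ZERO
```
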
2.3 Type inference
We present a bound inference algorithm for the abstract system which works by creating a system of constraints to be solved, separately, by an SMT-solver that
can handle the equational theory of the resource semiring. In the type grammar, for the exponential type \( J \cdot \theta \rightarrow \theta' \) we allow \( J \) to stand for a concrete element of \( \mathcal{J} \) or for a variable in the input program; the bound-inference algorithm will produce a set of constraints such that every model of those constraints gives rise to a typing derivation of the program without resource variables, as variables are instantiated to suitable concrete values. Type judgments have the form \( \Gamma \vdash M : \theta \triangleright \chi \), where \( \chi \) is a set of equational constraints in the semiring. We also allow an arbitrary set of constants \( k : \theta \), which will allow the definition of concrete programming languages based on the type system. We allow each constant \( k \) to introduce arbitrary resource constraints \( \chi_k \):
\[
\frac{}{x : 1 \cdot \theta \vdash x : \theta \triangleright \emptyset}
\qquad
\frac{}{\vdash k : \theta \triangleright \chi_k}
\]
\[
\frac{\Gamma \vdash M : \theta \triangleright \chi}{\Gamma, x : J \cdot \theta' \vdash M : \theta \triangleright \chi}
\qquad
\frac{\Gamma, x : J \cdot \theta \vdash M : \theta' \triangleright \chi}{\Gamma \vdash \lambda x. M : J \cdot \theta \rightarrow \theta' \triangleright \chi}
\]
\[
\frac{\Gamma, x : J_1 \cdot \theta', y : J_2 \cdot \theta'' \vdash M : \theta \triangleright \chi}{\Gamma, x : J \cdot \theta' \vdash M[x/y] : \theta \triangleright \chi \cup \{J = J_1 + J_2\} \cup \theta' = \theta''}
\]
\[
\frac{\Gamma \vdash M : J \cdot \theta \rightarrow \theta' \triangleright \chi \qquad \Gamma' \vdash N : \theta'' \triangleright \chi'}{\Gamma, J \cdot \Gamma' \vdash M N : \theta' \triangleright \chi \cup \chi' \cup \theta = \theta''}
\]
The constraints of shape \( \theta_1 = \theta_2 \) are to be interpreted in the obvious way, as the set of pairwise equalities between resource bounds used in the same position in the two types:
\[
\sigma = \sigma \quad \overset{\text{def}}{=} \emptyset
\]
\[
J_1 \cdot \theta_1 \rightarrow \theta_1' = J_2 \cdot \theta_2 \rightarrow \theta_2' \quad \overset{\text{def}}{=} \{ J_1 = J_2 \} \cup \theta_1 = \theta_2 \cup \theta_1' = \theta_2'.
\]
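This decomposition can be sketched directly (with a hypothetical encoding of types as nested tuples and resource variables as strings):

```python
# Sketch: decomposing theta1 = theta2 into pairwise equalities between
# resource bounds (hypothetical type encoding for illustration).
# Types: ("base", name) | ("arrow", J, theta, theta')  for  J·theta -> theta'

def eq_constraints(t1, t2):
    """Collect bound equalities (J1, J2) from structurally equal types."""
    if t1[0] == "base" and t2[0] == "base":
        return set()                        # base = base yields no constraints
    _, j1, a1, b1 = t1
    _, j2, a2, b2 = t2
    return {(j1, j2)} | eq_constraints(a1, a2) | eq_constraints(b1, b2)

com = ("base", "com")
t1 = ("arrow", "J1", com, ("arrow", "K1", com, com))
t2 = ("arrow", "J2", com, ("arrow", "K2", com, com))
assert eq_constraints(t1, t2) == {("J1", "J2"), ("K1", "K2")}
```
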
If \( \mathcal{M} \) is a model, i.e. a function mapping variables to concrete values, by \( \Gamma[\mathcal{M}] \) we denote the textual substitution of each variable by its concrete value in a sequent. The following is then true by construction:
**Theorem 1.** If \( \Gamma \vdash M : \theta \triangleright \chi \) and \( \mathcal{M} \) is a model of the system of constraints \( \chi \) in the semiring \( \mathcal{J} \) then \( (\Gamma \vdash M : \theta)[\mathcal{M}] \) is derivable.
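The flavour of the algorithm can be conveyed by a toy sketch over the naturals semiring. It handles only variables, abstractions and applications, collects just the contraction constraints that give each λ-bound variable its total number of uses, and ignores the multiplicative scaling performed by Application; the term encoding and all names are hypothetical.

```python
# Toy sketch of constraint collection for bound inference over the
# naturals semiring (hypothetical term encoding; the real algorithm also
# decomposes type equalities and defers solving to an SMT solver).
# Terms: ("var", x) | ("lam", x, body) | ("app", fun, arg)
import itertools

fresh = itertools.count()

def infer(term, constraints):
    """Return a map from free variables to their inferred bounds,
    appending constraints of the form (resource_variable, bound)."""
    kind = term[0]
    if kind == "var":
        return {term[1]: 1}                 # Identity: unit action
    if kind == "app":
        uses = dict(infer(term[1], constraints))
        for x, j in infer(term[2], constraints).items():
            uses[x] = uses.get(x, 0) + j    # Contraction: bounds add up
        return uses
    if kind == "lam":
        uses = infer(term[2], constraints)
        bound = uses.pop(term[1], 0)        # 0 = weakening of an unused var
        constraints.append((f"J{next(fresh)}", bound))
        return uses

# \x. f x x : the two uses of x are contracted, so its bound is 2.
cs = []
uses = infer(("lam", "x",
              ("app", ("app", ("var", "f"), ("var", "x")), ("var", "x"))), cs)
```
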
### 2.4 Categorical semantics
We give an abstract framework suitable for interpreting the abstract type system of Sec. 2. Up to this point the calling discipline of the type system was not relevant, as there are no side-effects, but for giving an interpretation we need to make this choice. In order to remain relevant to our motivating application, hardware compilation, we shall choose the call-by-name mechanism, which is used by the Geometry of Synthesis compiler.
We require two categories. We interpret *computations* in a symmetric monoidal closed category \( (\mathcal{G}, \otimes, I) \) in which the tensor unit \( I \) is a terminal object. Let \( \alpha \) be the *associator* and \( \lambda, \rho \) be the left and right *unitors*. We write the unique morphism into the terminal object as \( !_A : A \rightarrow I \). Currying is the isomorphism
\[
\Lambda_{A,B,C} : \mathcal{G}(A \otimes B, C) \simeq \mathcal{G}(B, A \to C),
\]
and the evaluation morphism is $eval_{A,B} : A \otimes (A \to B) \to B$.
We interpret resources in a category $\mathcal{R}$ with two monoidal tensors $(\oplus, 0)$ and $(\otimes, 1)$ such that:
\[
\begin{align*}
J \otimes (K \oplus L) &\simeq J \otimes K \oplus J \otimes L \quad \text{(r-distributivity)} \\
(J \oplus K) \otimes L &\simeq J \otimes L \oplus K \otimes L \quad \text{(l-distributivity)} \\
J \otimes 0 &\simeq 0 \otimes J \simeq 0 \quad \text{(zero)}.
\end{align*}
\]
The action of resources on computations is modeled by a functor $\cdot : \mathcal{R} \times \mathcal{G} \to \mathcal{G}$ such that the following natural isomorphisms must exist:
\[
\begin{align*}
\delta_{J,K,A} : J \cdot A \otimes K \cdot A &\simeq (J \oplus K) \cdot A \quad \text{(4)} \\
\pi_{J,K,A} : J \cdot (K \cdot A) &\simeq (J \otimes K) \cdot A \quad \text{(5)} \\
\zeta_A : 0 \cdot A &\simeq I \quad \text{(6)} \\
\iota_A : 1 \cdot A &\simeq A \quad \text{(7)}
\end{align*}
\]
and the following diagrams commute, where \( f : A \to B \) in Eqn. 9:
\[
\begin{array}{ccc}
J \cdot A \otimes K \cdot A \otimes L \cdot A & \xrightarrow{\delta_{J,K,A} \otimes 1_{L \cdot A}} & (J \oplus K) \cdot A \otimes L \cdot A \\
\downarrow{\scriptstyle 1_{J \cdot A} \otimes \delta_{K,L,A}} & & \downarrow{\scriptstyle \delta_{J \oplus K, L, A}} \\
J \cdot A \otimes (K \oplus L) \cdot A & \xrightarrow{\delta_{J, K \oplus L, A}} & (J \oplus K \oplus L) \cdot A
\end{array}
\tag{8}
\]
\[
\begin{array}{ccc}
J \cdot A \otimes K \cdot A & \xrightarrow{\delta_{J,K,A}} & (J \oplus K) \cdot A \\
\downarrow{\scriptstyle J \cdot f \otimes K \cdot f} & & \downarrow{\scriptstyle (J \oplus K) \cdot f} \\
J \cdot B \otimes K \cdot B & \xrightarrow{\delta_{J,K,B}} & (J \oplus K) \cdot B
\end{array}
\tag{9}
\]
Natural isomorphism $\pi$ (Eqn. 5) reduces successive resource actions on computations to a composite resource action, corresponding to the product of the semiring. Natural isomorphism $\delta_{J,K,A}$ in Eqn. 4 is a “quantitative” version of the diagonal morphism in a Cartesian category, which collects the resources of the contracted objects. The commuting diagram in Eqn. 8 stipulates that the order in which we use the “quantitative” diagonal to contract several objects is irrelevant, and the commuting diagram in Eqn. 9 gives a “quantitative” counterpart for the naturality of the diagonal morphism. Finally, Eqns. 6 and 7 show the connection between the units of the tensors involved.
A direct consequence of the naturality of $\rho$ and $I$ being terminal, useful for proving coherence, is:
**Proposition 1.** The following diagram commutes in the category $\mathcal{G}$ for any $f : B \to C$:
\[
\begin{array}{ccccc}
B \otimes A & \xrightarrow{1_B \otimes !_A} & B \otimes I & \xrightarrow{\rho_B} & B \\
\downarrow{\scriptstyle f \otimes 1_A} & & \downarrow{\scriptstyle f \otimes 1_I} & & \downarrow{\scriptstyle f} \\
C \otimes A & \xrightarrow{1_C \otimes !_A} & C \otimes I & \xrightarrow{\rho_C} & C
\end{array}
\]
Computation is interpreted in a canonical way in the category $\mathcal{G}$. Types are interpreted as objects and terms as morphisms, with
$$[[J \cdot \theta \to \theta']]_{\mathcal{G}} = ([[J]]_{\mathcal{R}} \cdot [[\theta]]_{\mathcal{G}}) \to [[\theta']]_{\mathcal{G}}.$$
From now on, the interpretation of the resource action is written as $J$ instead of $[[J]]_{\mathcal{R}}$ when there is no ambiguity and the subscript of $[[\cdot]]_{\mathcal{G}}$ is left implicit.
Environments are interpreted as
$$[[\Gamma]] = [[x_1 : J_1 \cdot \theta_1, \ldots, x_n : J_n \cdot \theta_n]] = J_1 \cdot [[\theta_1]] \otimes \cdots \otimes J_n \cdot [[\theta_n]].$$
Terms are morphisms in $\mathcal{G}$, $[[\Gamma \vdash M : \theta]]$, defined as follows:
$$[[x : 1 \cdot \theta \vdash x : \theta]] = \iota_{[[\theta]]}$$
$$[[\Gamma, x : J \cdot \theta' \vdash M : \theta]] = (1_{[[\Gamma]]} \otimes {!}_{J \cdot [[\theta']]}) ; \rho_{[[\Gamma]]} ; [[\Gamma \vdash M : \theta]]$$
$$[[\Gamma \vdash \lambda x. M : J \cdot \theta \to \theta']] = \Lambda_{J \cdot [[\theta]]}([[\Gamma, x : J \cdot \theta \vdash M : \theta']])$$
$$[[\Gamma, J \cdot \Gamma' \vdash F M : \theta']] = ([[\Gamma \vdash F : J \cdot \theta \to \theta']] \otimes J \cdot [[\Gamma' \vdash M : \theta]]) ; \mathrm{eval}_{J \cdot [[\theta]], [[\theta']]}$$
$$[[\Gamma, x : (J + K) \cdot \theta \vdash M[x/y] : \theta']] = (1_{[[\Gamma]]} \otimes \delta^{-1}_{J,K,[[\theta]]}) ; [[\Gamma, x : J \cdot \theta, y : K \cdot \theta \vdash M : \theta']].$$
### 2.5 Coherence
The main result of this section is the coherence of typing. The derivation trees are not unique because there is choice in the use of the weakening and contraction rules. Since meaning is calculated on a particular derivation tree we need to show that it is independent of it. The coherence conditions for the monoidal category are standard [15], but what is interesting and new is that resource manipulation does not break coherence. The key role is played by the isomorphism $\delta$ which is the resource-sensitive version of contraction, which can combine or de-compose resources without loss of information.
The key idea of the proof is that we can bring any derivation tree to a standard form (which we call stratified), with weakening and contraction performed as late as possible. A combination of weakenings and contractions can bring a term to linear form, which has a uniquely determined derivation tree. The key result is Lem. 3 which stipulates that the order in which contractions and weakenings are performed is irrelevant.
The following derivation rule is admissible because it is a chain of contractions and weakenings, followed by an abstraction:
$$\frac{x_1 : J_1 \cdot \theta, \ldots, x_n : J_n \cdot \theta, \Gamma \vdash M : \theta'}{\Gamma \vdash \lambda x.\, M[x/x_1] \cdots [x/x_m] : (J_1 + \cdots + J_m) \cdot \theta \to \theta'} \text{ AWC}$$
where $x \notin \text{fv}(M)$, $x_i \in \text{fv}(M)$ for all $1 \leq i \leq m$, and $x_j \notin \text{fv}(M)$ for all $m < j \leq n$, for some $1 \leq m \leq n$. Variables $x_1, \ldots, x_m$ are contracted into a fresh variable $x$ and dummy variables $x_{m+1}, \ldots, x_n$ can be added.
We denote sequents $\Gamma \vdash M : \theta$ by $\Sigma$ and derivation trees by $\nabla$. Let $\Lambda(\Sigma) \in \{id, wk, ab, ap, co, awc\}$ be a label on the sequents, indicating whether a sequent
is derived using the rule for identity, weakening, etc. If a sequent \( \Sigma = \Gamma \vdash M : \theta \) is the root of a derivation tree \( \nabla \) we write it \( \Sigma^\nabla \) or \( \Gamma \vdash^\nabla M : \theta \).
We say that a sequent is \textit{linear} if each variable in the environment \( \Gamma \) occurs freely in the term \( M \) exactly once.
**Definition 1.** For a linear sequent, we call a stratified derivation tree the unique derivation tree produced by the following deterministic algorithm.
\( MN: \) The only possible rule is Application and, since the judgement \( \Gamma, J \cdot \Delta \vdash MN : \theta \) is about a linear term, both \( \Gamma \vdash M : J \cdot \theta' \to \theta \) and \( \Delta \vdash N : \theta' \) are linear and there is only one way the environment can be split, unless \( J = 0 \). In this case any resource actions in \( \Delta \) can be chosen, since they will be zeroed by the action of \( J \). To keep the algorithm deterministic we choose zeroes. This ensures that every resource action in the derivation of \( N \) is also 0.
\( \lambda x.M: \) We use AWC to give each occurrence of \( x : J \cdot A \) in \( M \) a new (fresh) name \( x_i : J_i \cdot A \). Each \( J_i \) is uniquely determined by the context in which \( x_i \) occurs. Note that it is necessary that \( \sum_i J_i \leq J \), otherwise the term cannot be typed.
\( x: \) The only possible rule is Identity.
**Lemma 1.** If a linear sequent has a derivation tree then it has a (unique) stratified derivation tree. Moreover, all the sequents occurring in the tree are linear.
**Proof.** The proof is almost immediate (by contradiction). Linear derivations cannot use weakening or contractions except where they can be replaced by AWC, so to construct a stratified tree, we just need to normalise uses of 0.
We now show that any derivation can be reduced to a stratified derivation through applying a series of meaning-preserving tree transformations, which we call \textit{stratifying rules}.
The Weakening rule commutes trivially with all other rules except Identity, Abstraction and Contraction, if they act on the weakened variable. In these cases we replace the sequence of Weakening followed by Abstraction and/or Contraction with the combined \textit{AWC} rule. The more interesting tree transformation rules are for Contraction.
Contraction commutes with Application. There are two pairs of such rules, one for pushing down contraction in the function and one for pushing down contraction in the argument:
\[
\frac{\dfrac{\Gamma, x : J \cdot \theta, y : J' \cdot \theta \vdash F : J_1 \cdot \theta_1 \to \theta_2}{\Gamma, x : (J + J') \cdot \theta \vdash F[x/y] : J_1 \cdot \theta_1 \to \theta_2} \qquad \Gamma' \vdash M : \theta_1}{\Gamma, x : (J + J') \cdot \theta, J_1 \cdot \Gamma' \vdash F[x/y]\, M : \theta_2}
\]
\[
\Longrightarrow
\]
\[
\frac{\dfrac{\Gamma, x : J \cdot \theta, y : J' \cdot \theta \vdash F : J_1 \cdot \theta_1 \to \theta_2 \qquad \Gamma' \vdash M : \theta_1}{\Gamma, x : J \cdot \theta, y : J' \cdot \theta, J_1 \cdot \Gamma' \vdash F M : \theta_2}}{\Gamma, x : (J + J') \cdot \theta, J_1 \cdot \Gamma' \vdash (F M)[x/y] : \theta_2}
\]
Similarly for pushing down contraction from the argument side, and similarly for the rules involving weakening; in the argument case the contracted resource is scaled by the action of the function:
\[
\frac{\Gamma \vdash F : J_1 \cdot \theta_1 \to \theta_2 \qquad \dfrac{\Gamma', x : J \cdot \theta, y : J' \cdot \theta \vdash M : \theta_1}{\Gamma', x : (J + J') \cdot \theta \vdash M[x/y] : \theta_1}}{\Gamma, J_1 \cdot \Gamma', x : (J_1 \times (J + J')) \cdot \theta \vdash F\,(M[x/y]) : \theta_2}
\]
Contraction also commutes with Abstraction, if the contracted and abstracted variables are distinct, \(x \neq y\):
\[
\frac{\Gamma, x : J \cdot \theta, x' : J' \cdot \theta, y : K \cdot \theta' \vdash M : \theta''}{\Gamma, x : (J + J') \cdot \theta \vdash \lambda y.\, M[x/x'] : K \cdot \theta' \to \theta''}
\]
The rule for swapping contraction and weakening is (types are obvious and we elide them for concision):
\[
\frac{\dfrac{\Gamma, y, z \vdash M}{\Gamma, y, z, x \vdash M}}{\Gamma, y, x \vdash M[y/z]}
\;\Longrightarrow\;
\frac{\dfrac{\Gamma, y, z \vdash M}{\Gamma, y \vdash M[y/z]}}{\Gamma, y, x \vdash M[y/z]} \quad (WC)
\]
The final rule (ZO) zeroes out the resource actions of free identifiers used in the derivation of arguments to functions with a zero resource action:
\[
\frac{\Gamma \vdash M : 0 \cdot \theta \to \theta' \qquad \Gamma' \vdash N : \theta}{\Gamma, 0 \cdot \Gamma' \vdash M N : \theta'} \quad (ZO)
\]
**Proposition 2.** The following judgments are syntactically equal:
\[
\Gamma, x : \theta, \Gamma' \vdash F[x/y]\,M : \theta' \;=\; \Gamma, x : \theta, \Gamma' \vdash (FM)[x/y] : \theta',
\]
\[
\Gamma, x : (J_1 \times (J + J')) \cdot \theta, \Gamma' \vdash F\,(M[x/y]) : \theta_2 \;=\; \Gamma, x : (J_1 \times J + J_1 \times J') \cdot \theta, \Gamma' \vdash (FM)[x/y] : \theta_2,
\]
\[
\Gamma, x : (J + J') \cdot \theta \vdash \lambda y.\, M[x/x'] : K \cdot \theta' \to \theta'' \;=\; \Gamma, x : (J + J') \cdot \theta \vdash (\lambda y.\, M)[x/x'] : K \cdot \theta' \to \theta''.
\]
Proof. The proof of the first two statements is similar. Because Application is linear it means that an identifier \(y\) occurs either in \(F\) or in \(M\), but not in both.
Therefore \((FM)[x/y]\) is either \(F(M[x/y])\) or \((F[x/y])M\). This makes the terms syntactically equal. In any semiring, \(J_1 \times (J + J') = J_1 \times J + J_1 \times J'\), which makes the environments equal. Note that semiring equations are resolved syntactically in the type system, as pointed out at the beginning of this section. For the third statement we know that \(x \neq y\).
**Proposition 3.** If \(\nabla\) is a derivation and \(\nabla'\) is a tree obtained by applying a stratifying rule then \(\nabla'\) is a valid derivation with the same root \(\Sigma^\nabla = \Sigma^{\nabla'}\) and the same leaves.
**Proof.** By inspecting the rules and using Prop. 2.
Most importantly, stratifying transformations preserve the meaning of the sequent.
**Lemma 2.** If \(\nabla \Rightarrow \nabla'\) is a stratifying rule then \(\llbracket \Sigma^{\nabla} \rrbracket = \llbracket \Sigma^{\nabla'} \rrbracket\).
**Proof.** By inspecting the rules. Prop. 3 states that the root sequents are equal and the trees are well-formed. For WC (and the other rules involving the stratification of Weakening) this is an immediate consequence of Prop. 1. For AL and AR the equality of the two sides is an immediate consequence of symmetry in \(\mathcal{G}\) and the functoriality of the tensor \(\otimes\). For CA the equality of the two sides is an instance of the general property in a symmetric monoidal closed category that \( f ; \Lambda(g) = \Lambda((f \otimes 1_{B'}) ; g) \) for any \( f : A \to B \) and \( g : B \otimes B' \to C \). For ZO the equality is given by the (zero) isomorphism in the resource category and the \( \zeta \) isomorphism (Eqn. 6).
**Lemma 3.** If \(\nabla, \nabla'\) are derivation trees consisting only of Contraction and Weakening with a common root \(\Sigma\) then \(\llbracket \Sigma^{\nabla} \rrbracket = \llbracket \Sigma^{\nabla'} \rrbracket\).
**Proof.** Weakening commutes with any other rule (Prop. 1). Changing the order of multiple contractions of the same variable uses the associativity coherence property in Eqn. 8. Changing the order in which different variables are contracted uses the naturality coherence property in Eqn. 9.
The lemma above ensures that the AWC rule is itself semantically coherent.
**Lemma 4.** If \(\nabla\) is a derivation there exists a stratified derivation tree \(\nabla'\) which can be obtained from \(\nabla\) by applying a (finite) sequence of stratifying tree transformations. Moreover, \(\llbracket \Sigma^{\nabla} \rrbracket = \llbracket \Sigma^{\nabla'} \rrbracket\).
**Proof.** The stratifying transformations push contraction and weakening through any other rules and the derivation trees have finite height. If a contraction or weakening cannot be pushed through a rule it means that the rule is an abstraction on the variable being contracted or weakened, and we replace the rules with AWC. For the weakenings and contractions pushed to the bottom of the tree the order is irrelevant, according to Lem. 3. The result is a stratified tree. Next we apply induction on the chain of stratifying rules using Lem. 2 for every rule application and Lem. 3 for the final chain of weakenings and contractions.
Theorem 2 (Coherence). For any derivation trees $\nabla_1, \nabla_2$ with common root $\Sigma$, $\llbracket \Sigma^{\nabla_1} \rrbracket = \llbracket \Sigma^{\nabla_2} \rrbracket$.
Proof. Using Lem. 4, $\nabla_1, \nabla_2$ can be stratified into trees $\nabla'_1, \nabla'_2$ with the same root and $\llbracket \Sigma^{\nabla_i} \rrbracket = \llbracket \Sigma^{\nabla'_i} \rrbracket$ for $i = 1, 2$. We first reduce $\Sigma$ to a linear form (using contractions and weakenings) then use Lem. 1. The only differences between $\nabla'_1, \nabla'_2$ are the order of the contractions and weakenings at the bottom of the tree, and the choice of names of variables, both of which are semantically irrelevant (Lem. 3).
3 Case study: timing analysis
In the sequel we will present a more complex resource semiring which we shall use in giving a precise type-level analysis of timing. The interpretation of the type $J \cdot \theta \rightarrow \theta'$ is that the function needs a schedule of execution $J$ for the argument in order to execute. Again, note the comonadic interpretation of resources. This type system is interesting in its own right, as a way of capturing timing at the level of the type system. A full blown analysis for timing bounds, as part of a more general approach to certifying resource bounds, has been given before using dependent types [3]. However, this approach only automates the certification of the bounds whereas we fully automate the process, at the expense of less precision.
A schedule $J = [x_1, x_2, \ldots, x_n]$ is a multiset of stages $x_i$, which are one-dimensional contractive affine transformations over $\mathbb{R}$. This means that our reading of time is a relative one. A contractive affine transformation is represented as $x_{s, p} = \begin{pmatrix} s & p \\ 0 & 1 \end{pmatrix}$, where $0 \leq s \leq 1$ and $0 \leq s + p \leq 1$. The value $s$ is a scaling factor relative to the unit interval, and $p$ is a phase change, i.e. a delay from the time origin. For example, $x_{0.25, 0.5} = \begin{pmatrix} 0.25 & 0.5 \\ 0 & 1 \end{pmatrix}$ represents a stage that starts when $\frac{1}{2}$ of the duration has elapsed and lasts for $\frac{1}{4}$ of the duration relative to which we are measuring. Some extreme values are $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$, which overlaps perfectly with the reference interval, or $\begin{pmatrix} 0 & 1 \\ 0 & 1 \end{pmatrix}$, which starts at the end of the reference interval and has zero duration (is instantaneous).
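To make the representation concrete, here is a small sketch (illustrative, not the paper's implementation) of stages as $(s, p)$ pairs standing for the matrix above:

```python
# Illustrative sketch: a stage x_{s,p} as the pair (s, p), i.e. the matrix
# ((s, p), (0, 1)) acting on the unit interval [0, 1].

def interval(stage):
    """Image of the unit interval [0, 1] under t -> s*t + p."""
    s, p = stage
    return (p, s + p)

def is_contractive(stage):
    # 0 <= s <= 1 and 0 <= s + p <= 1; we also require p >= 0 so that the
    # image stays inside [0, 1] (an assumption made explicit here)
    s, p = stage
    return 0 <= s <= 1 and 0 <= p and s + p <= 1

x = (0.25, 0.5)   # starts half-way through, lasts a quarter of the duration
assert interval(x) == (0.5, 0.75)
assert is_contractive((1, 0)) and is_contractive((0, 1))  # the extreme stages
```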
For an example of how schedules are interpreted as type annotations, the type $[x_{0.5, 0}, x_{0.5, 0.5}] \cdot \text{com} \rightarrow \text{com}$ is that of a function that executes its argument twice: the first execution starts instantly and the second starts half-way through; both take $\frac{1}{2}$ of the overall execution.
In mathematical terms, schedules are the semigroup semiring of one-dimensional contractive affine transformations, usually written as $J = \mathbb{N}[\text{Aff}_1]$. This is a canonical construction which has the mathematical properties we desire.
Contractive affine transformations enable composition of timed functions in a natural way, because such transformations compose, by matrix product. Composing time represented as absolute intervals is perhaps possible, but it complicates the rules of the type system significantly. By using relative timing the rules of the system are clean, at the expense of a rather complicated final step of elaborating relative into absolute timings for a closed term (i.e. a program), as will be seen in Sec. 3.3.
When we refer to the timing of a computation, and it is unambiguous from context, we will sometimes use just \( x \) to refer to its action on the unit interval \( u = [0,1] \). For example, if we write \( x \subseteq x' \) we mean \( x\cdot u \subseteq x'\cdot u \), i.e. \( [p,s+p] \subseteq [p',s'+p'] \), i.e. \( p \geq p' \) and \( s + p \leq s' + p' \). If we write \( x \leq x' \) we mean the Egli-Milner order on the two intervals, \( x\cdot u \leq x'\cdot u \), i.e. \( p \leq p' \) and \( s + p \leq s' + p' \). If we write \( x \cap x' = \emptyset \) we mean the two intervals are disjoint, \( x\cdot u \cap x'\cdot u = \emptyset \), etc.
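These shorthand orders can be sketched directly, reading each stage off its action on the unit interval (an illustrative Python rendering; touching endpoints are read as disjoint):

```python
# Illustrative sketch of the shorthand orders on stages, read off their
# action on the unit interval: interval((s, p)) = [p, s + p].

def interval(stage):
    s, p = stage
    return (p, s + p)

def contained(x, xp):
    """x is contained in x': the interval of x lies inside that of x'."""
    a, b = interval(x); c, d = interval(xp)
    return a >= c and b <= d

def egli_milner(x, xp):
    """x <= x' in the Egli-Milner order: both endpoints ordered."""
    a, b = interval(x); c, d = interval(xp)
    return a <= c and b <= d

def disjoint(x, xp):
    """The two intervals are disjoint (touching endpoints count as disjoint)."""
    a, b = interval(x); c, d = interval(xp)
    return b <= c or d <= a

assert contained((0.25, 0.5), (0.5, 0.5))   # [0.5, 0.75] inside [0.5, 1.0]
assert egli_milner((0.5, 0.0), (0.5, 0.5))  # [0.0, 0.5] below [0.5, 1.0]
assert disjoint((0.5, 0.0), (0.5, 0.5))
```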
Contractive affine transformations form a semigroup with matrix product as multiplication and unit element \( I \triangleq \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \). The semiring of a semigroup \((G, \times, I)\) is a natural construction from any semiring and any semigroup. In our case the semiring is natural numbers (\( \mathbb{N} \)), so the semigroup semiring is the set of finitely supported functions \( J : \text{Aff}_1 \to \mathbb{N} \) with
\[
0(x) = 0 \\
1(x) = \begin{cases} 1 & \text{if } x = I \\ 0 & \text{otherwise} \end{cases} \\
(J + K)(x) = J(x) + K(x) \\
(J \times K)(x) = \sum_{y,z \in \text{Aff}_1, \ y \times z = x} J(y) \times K(z).
\]
This is isomorphic to finite multisets over \( \text{Aff}_1 \). We use interchangeably whichever representation is more convenient.
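A hedged sketch of these semiring operations in the multiset representation, using `Counter` (names are illustrative, not the paper's code):

```python
# Sketch of the semigroup semiring N[Aff_1]: schedules as finite multisets
# (Counter) of stages, with multiplication given by pairwise composition of
# stages (matrix product).
from collections import Counter

def compose(x, y):
    """Matrix product of stages: (s, p) * (s', p') = (s*s', s*p' + p)."""
    (s, p), (s2, p2) = x, y
    return (s * s2, s * p2 + p)

I = (1.0, 0.0)  # the unit stage

def add(J, K):
    """(J + K)(x) = J(x) + K(x): multiset union."""
    return J + K

def mul(J, K):
    """(J x K)(x) = sum over y*z = x of J(y)*K(z)."""
    out = Counter()
    for y, m in J.items():
        for z, n in K.items():
            out[compose(y, z)] += m * n
    return out

one = Counter({I: 1})
J = Counter({(0.5, 0.0): 1, (0.5, 0.5): 1})
assert mul(one, J) == J and mul(J, one) == J   # 1 is the multiplicative unit
```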
### 3.1 A concrete programming language
A concrete programming language is obtained by adding a family of functional constants in the style of Idealized Algol [22]. We take commands and integer expressions as the base types, \( \sigma ::= \text{com} \mid \text{exp} \).
Ground-type constants are just \( n : \text{exp} \) and \( \text{skip} : \text{com} \). Ground-type operators are provided with explicit timing information. For example, for commands we have a family of timed composition operators (i.e. schedulers):
\[
\text{comp}_{x,y} : [x]\cdot \text{com} \to [y]\cdot \text{com} \to \text{com}.
\]
Both sequential and parallel composition are subsumed by the timed scheduler. Sequential composition is a scheduler in which the arguments are non-overlapping, with the first argument completing before the second argument starts: \( \text{seq}_{x,y} = \text{comp}_{x,y} \) where \( x \leq y \) and \( x \cap y = \emptyset \) (which we write \( x < y \)). Parallel composition is simply \( \text{par}_x = \text{comp}_{x,x} \), with both arguments initiating and completing execution at the same time. Schedulers that are neither purely sequential nor parallel, but a combination thereof, are also possible.
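The side condition $x < y$ that distinguishes sequential from general timed composition can be sketched as follows (illustrative, stages as $(s, p)$ pairs):

```python
# Sketch of the condition x < y: x is Egli-Milner below y and the two
# intervals are disjoint; seq_{x,y} is comp_{x,y} only when this holds.

def interval(stage):
    s, p = stage
    return (p, s + p)

def strictly_before(x, y):
    a, b = interval(x)
    c, d = interval(y)
    em = a <= c and b <= d            # Egli-Milner order x <= y
    apart = b <= c or d <= a          # disjoint (touching endpoints allowed)
    return em and apart

# seq: first half strictly before second half; par_x reuses a single stage
assert strictly_before((0.5, 0.0), (0.5, 0.5))
assert not strictly_before((1.0, 0.0), (1.0, 0.0))
```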
Arithmetic operators and branching (if) are also given explicit timings.
\[ \text{op}_{x,y} : [x] \cdot \exp \rightarrow [y] \cdot \exp \rightarrow \exp, \]
\[ \text{if}_{x,y} : [x] \cdot \exp \rightarrow [y] \cdot \sigma \rightarrow [y] \cdot \sigma \rightarrow \sigma, \quad x < y. \]
Note that branching has an additional sequentiality constraint which stipulates that the guard must execute before the branches are allowed to start executing. This is not a type-related constraint, but a language-level constraint.
Assignable variables are handled by separating read and write access, as is common in Idealized Algol (IA). Let the type of acceptors be defined (syntactically) as the family \( \text{acc}_w \triangleq [w] \cdot \exp \rightarrow \text{com} \). There is no stand-alone var type, instead the reader and writers to a variable are bound to the same memory location by a block variable constructor with signature:
\[ \text{new}_{\sigma,J,w_1,\ldots,w_n} : (J \cdot \exp \rightarrow \text{acc}_{w_1} \rightarrow \cdots \rightarrow \text{acc}_{w_n} \rightarrow \sigma) \rightarrow \sigma, \quad \sigma \in \{\exp, \text{com}\}. \]
The asymmetric treatment of readers and acceptors is a consequence of using call-by-name: the read operation is an expression thunk with no arguments, but the acceptor needs to evaluate its argument which can take an arbitrary amount of time. For programmer convenience var-typed identifiers can be sugared into the language but, because the read and write schedules of access need to be maintained separately, the contraction rules become complicated (yet routine) so we omit them here.
**Example 1.** The timings of the IA program \( \text{new} v. v := !v + 1 \) can be captured by this typing system. First let us write it in a functional-style syntax where the occurrences of \( v \) are linearized: \( \text{new}(\lambda v_1 \lambda v_2. v_2(\text{add} v_1 1)) \). The type of this linearized local-variable binder is \( \text{new} : (\exp \rightarrow \text{acc} \rightarrow \text{com}) \rightarrow \text{com} \).
The next step is to determine schedules of execution for the constants. The typing derivation is
\[
\begin{align*}
& v_1 : 1 \cdot \exp \vdash v_1 : \exp \\
& \vdash \text{add}_{x,y} : [x] \cdot \exp \rightarrow [y] \cdot \exp \rightarrow \exp \\
& v_1 : [x] \cdot \exp \vdash \text{add}_{x,y}\, v_1 : [y] \cdot \exp \rightarrow \exp \\
& v_1 : [x] \cdot \exp \vdash \text{add}_{x,y}\, v_1\, 1 : \exp \\
& v_2 : \text{acc}_w,\ v_1 : [w \times x] \cdot \exp \vdash v_2(\text{add}_{x,y}\, v_1\, 1) : \text{com} \\
& \vdash \lambda v_1 \lambda v_2.\, v_2(\text{add}_{x,y}\, v_1\, 1) : [w \times x] \cdot \exp \rightarrow \text{acc}_w \rightarrow \text{com}
\end{align*}
\]
for any stages \( x, y, w \). To complete the term we need to apply the binder \( \text{new}_{\text{com}, [w \times x], w} \).
Written in a fully sugared notation, this term would be: \( \text{new}_{\text{com}, [w \times x], w}\ v.\ v := !v +_{x,y} 1 \). We will see later how to choose sensible concrete values for the stages.
### 3.2 Type inference for pipelining
Computing such detailed timings can perhaps be useful when doing real-time computation using programs with higher-order functions without recursion, as this language is expressive enough for implementing, for example, certain digital signal processing algorithms. However, we will look at a different application motivated by hardware compilation: imposing a pipelining discipline via the type system. Pipelining is important because it allows the concurrent use of a hardware component and thus reduces the overall footprint of a program compiled to hardware. Without it, any concurrently used component is systematically replicated, a process called serialization [11].
The constraints imposed by the typing system, as seen in Example 1, can be quite loose, and there can be broad choice in selecting concrete values for the stages. In some sense this is a bug, because there can be no principal type, but we will turn it into a handy feature by introducing extra constraints motivated by the platform to which we are compiling the program, in this case one relying on pipelining. Thus the overall system of constraints will contain type, language and platform constraints, independently of each other, which affords a pleasant degree of modularity. The rest of the section describes the type inference algorithm.
First, an observation: the general recipe from Sec. 2.3 cannot be immediately applied because there is no (off-the-shelf) SMT solver for $\mathbb{N}[\text{Aff}_1]$. We need to run the SMT solver in two stages: first we calculate the sizes of the multisets (as in SCC inference), which allows us to reduce constraints in $\mathbb{N}[\text{Aff}_1]$ to constraints in $\text{Aff}_1$. Then we map equations over $\text{Aff}_1$ into real-number equations, which can be handled by the SMT solver. There is a final, bureaucratic, step of reconstructing the multisets from the real-number values. To fully automate the process we start with Hindley-Milner type inference to determine the underlying simple-type structure [19].
Multiset-size (SCC) type inference is presented in detail elsewhere [11], but we will quickly review it here. We first interpret schedules as natural numbers representing their number of stages, $J \in \mathbb{N}$. Unknown schedules are variables; schedules with unknown stages but fixed size (such as those for operators) are constants. A type derivation results in a constraint system over $\mathbb{N}$, which can be solved by an SMT tool such as Z3 [20]. More precisely, Z3 can attempt to solve the system: it may be unsatisfiable in some cases, or it may fail to be solved at all, as nonlinear systems of constraints over $\mathbb{N}$ are in general undecidable.
As a practical observation, solving these constraints using general-purpose tools will give an arbitrary solution, if one exists, whereas a "small" solution is preferable. A special-purpose algorithm guaranteed to produce solutions that are in a certain sense minimal is given in [11]. To achieve a small solution when using Z3 we set a global maximum bound, which we increment on iterated calls to Z3 until the system is satisfied.
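The bound-incrementing tactic can be sketched as follows; a naive enumerator stands in for Z3 here, and all names are illustrative:

```python
# Hedged sketch of the "global bound, increment until satisfiable" tactic.
# A brute-force enumerator stands in for Z3; constraints are predicates on
# a tuple of natural-number variables.
from itertools import product

def solve_with_bound(constraints, nvars, bound):
    """Return an assignment in {0..bound}^nvars satisfying every constraint."""
    for vals in product(range(bound + 1), repeat=nvars):
        if all(c(vals) for c in constraints):
            return vals
    return None

def small_solution(constraints, nvars, max_bound=32):
    """Iterate the bound upwards so the first solution found is small."""
    for bound in range(max_bound + 1):
        sol = solve_with_bound(constraints, nvars, bound)
        if sol is not None:
            return sol
    return None

# example multiset-size constraints: n0 = n1 * n2 with n1, n2 >= 2
cs = [lambda v: v[0] == v[1] * v[2],
      lambda v: v[1] >= 2,
      lambda v: v[2] >= 2]
assert small_solution(cs, 3) == (4, 2, 2)
```

With a real SMT back end the inner enumerator would be replaced by a solver call with the bound added as an extra constraint on every variable.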
Next we instantiate schedules to their known sizes and re-run the inference algorithm, this time in order to compute the stages. This proceeds according to the general type-inference recipe, resulting in a system of constraints over the $\mathbb{N}[\text{Aff}_1]$ semiring, with the particular feature that the sizes of all the multisets are known. We only need to specify the schedules for the constants:
\[
\begin{align*}
\vdash 1 : \text{exp} & \triangleright \text{true} \\
\vdash \text{skip} : \text{com} & \triangleright \text{true} \\
\vdash \text{op}_{x,y} : [x] \cdot \exp \rightarrow [y] \cdot \exp \rightarrow \exp & \triangleright \{ x \neq I, y \neq I \}
\end{align*}
\]
\[
\begin{align*}
\vdash \text{if}_{x,y} : [x] \cdot \exp \rightarrow [y] \cdot \sigma \rightarrow [y] \cdot \sigma \rightarrow \sigma & \triangleright \{ x < y \} \\
\vdash \text{new}_{\sigma, J, w_1, \ldots, w_n} : (J \cdot \exp \rightarrow \text{acc}_{w_1} \rightarrow \cdots \rightarrow \text{acc}_{w_n} \rightarrow \sigma) \rightarrow \sigma & \triangleright \bigwedge_{i=1..n} \{ 0 \neq w_i \}
\end{align*}
\]
In the typing for *op* we disallow an instant response, and in the typing for *new* we disallow instantaneous write operations.
As mentioned, in the concrete system it is useful to characterize the resource usage of families of constants also by using constraints, which can be combined with the other constraints (of the type system, etc.). The language of constraints itself can be extended arbitrarily, provided that eventually we can translate it into the language of our external SMT solver, Z3. The constraints introduced by the language constants are motivated as follows:
**op:** We prevent the execution of either argument from taking the full interval, because an arithmetic operation cannot be computed instantaneously.
**if:** The execution of the guard must precede that of the branches.
**new:** The write-actions cannot be instantaneous.
This allows us to translate the constraints from the semiring theory into real-number constraints. Solving the system (using Z3) gives precise timing bounds for all types. However, this does not guarantee that computations can be pipelined; it just establishes timings. In order to enforce a pipeline-compatible timing discipline we need to add extra constraints guaranteeing that each timing annotation \( J \) is in fact a proper pipeline.
Two stages \( x_1, x_2 \in \text{Aff}_1 \) are *FIFO* if they are Egli-Milner-ordered, \( x_1 \leq x_2 \). They are *strictly FIFO*, written \( x_1 \triangleleft x_2 \), if they are FIFO and they neither start nor end at the same time, i.e. if \( x_i \cdot [0, 1] = [t_i, t'_i] \) then \( t_1 \neq t_2 \) and \( t'_1 \neq t'_2 \).
**Definition 2.** We say that a schedule \( J \in \mathbb{N}[\text{Aff}_1] \) is a pipeline, written \( \text{Pipe}(J) \), if and only if \( \forall x \in \text{Aff}_1, J(x) \leq 1 \) (i.e. \( J \) is a proper set) and for all \( x, x' \in J \), either \( x \triangleleft x' \) or \( x' \triangleleft x \) or \( x = x' \).
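Definition 2 translates directly into a predicate (an illustrative sketch; schedules as lists of $(s, p)$ stages):

```python
# Sketch of Definition 2: a schedule J (a list of (s, p) stages) is a
# pipeline when it is a proper set totally ordered by the strict-FIFO order.

def interval(stage):
    s, p = stage
    return (p, s + p)

def strict_fifo(x, y):
    """Strict FIFO: Egli-Milner ordered, with distinct start points and
    distinct end points."""
    a, b = interval(x)
    c, d = interval(y)
    return a <= c and b <= d and a != c and b != d

def is_pipeline(J):
    if len(set(J)) != len(J):        # J(x) <= 1: J must be a proper set
        return False
    return all(x == y or strict_fifo(x, y) or strict_fifo(y, x)
               for x in J for y in J)

assert is_pipeline([(0.5, 0.0), (0.5, 0.25)])
assert not is_pipeline([(0.5, 0.0), (0.25, 0.25)])  # shared end point 0.5
```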
Given a system of constraints \( \chi \) over \( \mathbb{N}[\text{Aff}_1] \), before solving it we augment it with the condition that every schedule is a proper pipeline: for any \( J \) used in \( \chi \), \( \text{Pipe}(J) \). Using the conventional representation (scaling and phase), the usual matrix operations and the pipelining definitions above we can represent \( \chi \) as a system of constraints over \( \mathbb{R} \), and solve it using Z3.
**Implementation note.** For the implementation, we impose an arbitrary order on the stages of the pipeline and, if that particular order is not satisfiable, a different (arbitrary) order is chosen and the process is repeated. Spelling out the constraint for the existence of a pipelining order \( \triangleleft \) for any schedule \( J \) would entail a disjunction over all possible such orders, which is \( O(n!) \) in the size of each schedule, and therefore not realistic. However, if the system of constraints has few constants and mostly unknowns, i.e. we are trying to find a schedule rather than accommodate complex known schedules, our experience shows that this pragmatic approach is reasonable.
**Example 2.** Let us first consider the simple problem of using three parallel adders to compute the sum \( fx + fx + fx + fx \) when we know the timings of \( f \). Suppose \( f : [(0.5, 0.1); (0.5, 0.2)] \cdot \exp \rightarrow \exp \), i.e. it is a two-stage pipeline where the execution of the argument takes half the time of the overall execution and the two stages have relative delays of 0.1 and 0.2, respectively. We have the choice of using three adders with distinct schedules \( +_i : [x_i] \cdot \exp \rightarrow [y_i] \cdot \exp \rightarrow \exp \) (\( i \in \{1, 2, 3\} \)) so that the expression respects the pipelined schedule of execution of \( f \). The way the operators are associated is relevant: \( (fx +_2 fx) +_1 (fx +_3 fx) \). Also note that part of the specification of the problem entails that the adders are trivial (single-stage) pipelines. Following the algorithm above, the typing constraints are resolved to the following:
\[
\begin{align*}
+_1 : & [(0.5, 0.25625)] \exp \rightarrow [(0.5, 0.25)] \exp \rightarrow \exp \\
+_2 : & [(0.5, 0.21875)] \exp \rightarrow [(0.5, 0.25)] \exp \rightarrow \exp \\
+_3 : & [(0.5, 0.375)] \exp \rightarrow [(0.5, 0.25)] \exp \rightarrow \exp
\end{align*}
\]
In the implementation, the system of constraints has 142 variables and 357 assertions, and is solved by Z3 in circa 0.1 seconds on a high-end desktop machine.
**Example 3.** Let us now consider a more complex, higher-order example. Suppose we want to calculate the convolution \((*)\) of a pipelined function \( f : [(0.5, 0.1); (0.5, 0.2)] \cdot \exp \rightarrow \exp \) with itself four times. Suppose also that we want to use just two instances of the convolution operator, \(*_1, *_2\), so we need to perform contraction on it as well. The type skeleton of the convolution operator is \((*) : (\exp \rightarrow \exp) \rightarrow (\exp \rightarrow \exp) \rightarrow \exp \rightarrow \exp\).
The implementations of \( f \) and \(*\) are unknown, so we want to compute the timings for the term
\[
\begin{align*}
(*_1) &: J^{*_1}_1 \cdot (J^{*_1}_2 \cdot \exp \rightarrow \exp) \rightarrow J^{*_1}_3 \cdot (J^{*_1}_4 \cdot \exp \rightarrow \exp) \rightarrow \exp \rightarrow \exp, \\
(*_2) &: J^{*_2}_1 \cdot (J^{*_2}_2 \cdot \exp \rightarrow \exp) \rightarrow J^{*_2}_3 \cdot (J^{*_2}_4 \cdot \exp \rightarrow \exp) \rightarrow \exp \rightarrow \exp, \\
f &: J \cdot ([(0.5, 0.1); (0.5, 0.2)] \cdot \exp \rightarrow \exp) \vdash (f *_1 f) *_2 (f *_1 f) : \theta.
\end{align*}
\]
The constraint system has 114 variables and 548 assertions and is solved by Z3 in 0.2 seconds on a high-end desktop machine. The results are:
\[
\begin{align*}
J^{i_1} & = J^{i_1} = J^{i_2} = J^{i_2} = [(1.0, 0.0)] \\
J^{i_1} & = J^{i_1} = J^{i_2} = J^{i_2} = J^{i_2} = J^{i_2} = [(0.5, 0.1); (0.5, 0.2)] \\
J^{i_1} & = J^{i_1} = J^{i_1} = J^{i_2} = [(0.5, 0.125); (0.5, 0.25); (0.5, 0.375); (0.5, 0.4375)] \\
J^{i_2} & = [(0.25, 0.25); (0.25, 0.5); (0.25, 0.625)]
\end{align*}
\]
### 3.3 Absolute timing
This section presents a variation of the type system that deals with absolute rather than relative timing. The presentation is more informal, but the formalism of the previous sections can be applied here if desired.
In our main intended application, hardware compilation, relative rather than absolute timing is relevant. However, for other applications, such as real-time computing, absolute timing might be required. We can recover absolute timings for a program (closed term) in two steps. What is interesting here is the introduction of yet another level of constraints, this time imposed by the physical characteristics of the computational platform we use. They come in addition to the structural, language and architectural constraints seen so far.
In the first step we propagate the timing annotations all the way down to the constants. The constants of the language are families indexed by schedules, and this propagation will generate the set of all concrete constants used by a program, with timings given relative to the overall execution of the program. The function \( \ulcorner - \urcorner(-) \) takes as arguments a term and a schedule and produces a set of language constants. It is defined inductively on the type derivation as follows:
\[
\begin{align*}
\ulcorner \Gamma, x : 1 \cdot \theta \vdash x : \theta \urcorner(J) &= \emptyset \\
\ulcorner \Gamma, x : K \cdot \theta \vdash M : \theta' \urcorner(J) &= \ulcorner \Gamma \vdash M : \theta' \urcorner(J), \quad x \notin fv(M) \\
\ulcorner \Gamma \vdash \lambda x. M : K \cdot \theta \rightarrow \theta' \urcorner(J) &= \ulcorner \Gamma, x : K \cdot \theta \vdash M : \theta' \urcorner(J) \\
\ulcorner \Gamma, K \cdot \Delta \vdash F M : \theta' \urcorner(J) &= \ulcorner \Gamma \vdash F : K \cdot \theta \rightarrow \theta' \urcorner(J) \cup \ulcorner \Delta \vdash M : \theta \urcorner(J \times K) \\
\ulcorner \Gamma, x : (K + L) \cdot \theta \vdash M[x/y] : \theta' \urcorner(J) &= \ulcorner \Gamma, x : K \cdot \theta, y : L \cdot \theta \vdash M : \theta' \urcorner(J) \\
\ulcorner k : \theta \urcorner(J) &= \{ k : \ulcorner \theta \urcorner([x]) \mid x \in J \} \\
\ulcorner K \cdot \theta \rightarrow \theta' \urcorner(J) &= \ulcorner \theta \urcorner(J \times K) \rightarrow \ulcorner \theta' \urcorner(J) \\
\ulcorner \sigma \urcorner(J) &= J \cdot \sigma.
\end{align*}
\]
The most interesting case is the translation of the constants. In the case of our concrete programming language we have, for example:
\[
\ulcorner \text{op} : [x] \cdot \exp \rightarrow [y] \cdot \exp \rightarrow \exp \urcorner([u]) = \text{op} : [u \times x] \cdot \exp \rightarrow [u \times y] \cdot \exp \rightarrow [u] \cdot \exp
\]
and so on. This is a constant which executes in interval \( u \), and its arguments in \( u \times x \) and \( u \times y \), which now represent absolute timings. The reason we collect these constants is that, depending on the concrete target platform, some of them may be impossible to implement from a timing point of view. For the operation above (\( \text{op} \)), if we work out the numbers we get \( t_1 = u_1 x_1 + u_1 x_2 + u_2 \) and \( t_2 = u_1 y_1 + u_1 y_2 + u_2 \) as the respective times when the two arguments terminate, which means that \( \text{op} \) must compute its result between \( \max(t_1, t_2) \) and its own termination at \( t = u_1 + u_2 \), i.e. within \( \delta t = u_1 - \max(u_1 x_1 + u_1 x_2, u_1 y_1 + u_1 y_2) \).
This \( \delta t \) must be greater than a system-defined constant such as the duration of one clock cycle (e.g. 1 ns).
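A sketch of this slack check (illustrative; stages as (scale, phase) pairs, with times measured in nanoseconds):

```python
# Illustrative sketch: slack available to op before its own termination.
# u, x, y are (scale, phase) pairs; the arguments of op_{x,y} run in the
# composite stages u*x and u*y, terminating at t1 and t2 respectively.

def slack(u, x, y):
    u1, u2 = u
    (x1, x2), (y1, y2) = x, y
    t1 = u1 * x1 + u1 * x2 + u2          # first argument terminates here
    t2 = u1 * y1 + u1 * y2 + u2          # second argument terminates here
    t = u1 + u2                          # op itself terminates here
    return t - max(t1, t2)               # = u1 - max(u1*x1+u1*x2, u1*y1+u1*y2)

CLOCK_NS = 1.0  # system-defined constant: one clock cycle (1 ns)

def implementable(u, x, y):
    return slack(u, x, y) >= CLOCK_NS

# a 2 ns op whose arguments both run in its first half has 1 ns of slack
assert implementable((2.0, 0.0), (0.5, 0.0), (0.5, 0.0))
```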
For any program \( \vdash M : \text{com} \), its set of constants is \( \ulcorner \vdash M : \text{com} \urcorner([d]) \), where \( d \) is an affine (not necessarily contractive) transformation defining its total duration. The value of \( d \) is not known in advance and must be chosen large enough so that all the constants in \( \ulcorner M \urcorner([d]) \) are implementable.
**Example 4.** Consider the term \( 1 +_{x,y} (2 +_{u,v} 3) : \text{exp} \). It is quite easy to calculate that
\[
\begin{align*}
\ulcorner \vdash 1 +_{x,y} (2 +_{u,v} 3) : \exp \urcorner([d]) = \{ & + : [d \times x] \cdot \exp \rightarrow [d \times y] \cdot \exp \rightarrow [d] \cdot \exp, \\
& + : [d \times y \times u] \cdot \exp \rightarrow [d \times y \times v] \cdot \exp \rightarrow [d \times y] \cdot \exp, \\
& 1 : [d \times x] \cdot \exp,\; 2 : [d \times y \times u] \cdot \exp,\; 3 : [d \times y \times v] \cdot \exp \}
\end{align*}
\]
Suppose that all the additions can be performed in 1 ns and the constants can be computed instantaneously. These timing constraints are satisfied by \( d = \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix} \), \( y = \begin{pmatrix} 0.5 & 0 \\ 0 & 1 \end{pmatrix} \), and \( x = u = v = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \).
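As a numeric sanity check of Example 4 (a sketch; it assumes the inner addition is scheduled inside \( d \times y \), consistent with the solved values):

```python
# Numeric check (a sketch): d = (2, 0) gives a 2 ns duration, y = (0.5, 0),
# and x = u = v = (0, 0); each addition is then left 1 ns of slack.

def compose(a, b):
    """Matrix product of (scale, phase) stages."""
    (s, p), (s2, p2) = a, b
    return (s * s2, s * p2 + p)

def interval(stage):
    s, p = stage
    return (p, s + p)

d, x, y = (2.0, 0.0), (0.0, 0.0), (0.5, 0.0)
u = v = (0.0, 0.0)

# outer +: arguments live in d*x and d*y, the operator terminates with d
outer_end = interval(d)[1]
arg_end = max(interval(compose(d, s))[1] for s in (x, y))
assert outer_end - arg_end == 1.0               # 1 ns for the outer addition

# inner + (assumed inside d*y): its arguments live in d*y*u and d*y*v
ref = compose(d, y)
inner_arg_end = max(interval(compose(ref, s))[1] for s in (u, v))
assert interval(ref)[1] - inner_arg_end == 1.0  # 1 ns for the inner addition
```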
## 4 Related work
The BLL type system has already been generalized by Dal Lago and collaborators to Linear Dependent Types (LDT) [4, 6]. This greatly increases the expressiveness of the type system, but at the expense of losing decidability. We also generalize BLL, but in a different way, by using an abstract notion of resource. It is natural to think of resources as having a monoidal structure, as resources can be aggregated. However, we show that the additional structure of a semiring can be employed in a useful way to scale resources. Our generalization consists of replacing the family of modalities \(!_x A\) of BLL, which is interpreted as "\( A \) may be reused less than \( x \) times", with a general resource action \( R \cdot A \), which is interpreted as "\( A \) may use at most \( R \) resources". This is a generalization because \( R \) can simply be instantiated to \( x \), giving back BLL. For this abstract type system we show how the problem of type inference can be naturally reduced to a system of constraints parametrized by the equational theory of the resource semiring. Provided this theory is decidable, a type inference algorithm automatically follows.
We also provide a categorical framework, for which we prove the key result of coherence. This is the main technical contribution of the paper. Coherence is an essential technical property because denotational interpretations are given inductively on type derivations, which are generally not unique; in the absence of coherence a denotational interpretation cannot make sense. Coherence for a categorical semantics is also the generalization of the subject reduction property used by operational semantics, as substitution is usually interpreted by composition in the category. Resource-awareness has usually been modeled operationally, but game-semantic [7] and, more recently, relational [16] models have been proposed to model resources denotationally.
The same typing framework presented here was developed independently in [2] (published in this volume), but it includes resource actions in covariant positions so that it can be used to model call-by-value languages. For this larger type system, soundness is proved relative to an operational semantics with so-called coeffect actions.
The second part of the paper presents a non-trivial motivating application to timing analysis and automated pipelining of computations in a recursion-free functional programming language with local store, and is meant to illustrate several points. The first one is showing a complex notion of resource in action. The second one is presenting a non-trivial multi-stage type inference algorithm for this resource. The third one is to show a specialization of the type inference algorithm in the case of a concrete programming language when language constants and arbitrary system-level resources can come into play.
## 5 Conclusion
We have presented an abstract framework for BLL using a more general notion of resource, modeled as a semiring, gave a categorical model, and proved coherence. We gave several instances of this general typing framework, depending on several notions of resource, one of which is a fairly elaborate method for tracking execution time in a higher-order setting. We have not given concrete semantics here, but denotational (game) models of various programming languages that fit this framework have been developed elsewhere [10, 12, 24].
One methodological feature which seems quite unique for this typing framework, and is amply illustrated in the previous section, is its degree of flexibility and modularity. In addition to the structural constraints imposed by the type system we can freely add language-level constraints (e.g. “the if statement is sequential”), architectural constraints (e.g. “schedules must be pipelines”) and physical constraints (e.g. “addition can be performed no faster than 1 ns”). Various passes of the type-inference algorithm collect constraints which, ultimately, are about what language constants are implementable or not within certain resource constraints on a particular physical platform. The modularity of the system is expressed in a different dimension as well. Since the Cartesian product of semirings is a semiring we can easily combine unrelated notions of constraints, which is essential in managing the trade-offs that need to be made in a realistic system.
Acknowledgment. Sec. 2.4 benefited significantly from discussions with Steve Vickers. Olle Fredriksson and Fredrik Nordvall Forsberg provided useful comments. The authors are grateful for their contributions.
References
MODEL DRIVEN DEVELOPMENT OF GAMIFIED APPLICATIONS
Piero Fraternali, Sergio Luis Herrera Gonzalez
Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Piazza Leonardo da Vinci 32, Milan, 20133, Italy
piero.fraternali@polimi.it, sergioluis.herrera@polimi.it
Abstract
Gamification is defined as the injection of game elements in applications with non-gaming purposes. This technique has shown outstanding results in promoting engagement and activity in communities of users, in both business and not-for-profit fields. Often, gamification features are added late in the application life-cycle and must be weaved into the existing functions. In this paper, we present a model-driven approach to the design of gamified applications, which accelerates the introduction of gamification elements in pre-existing or new applications. The approach relies on a data model of gamification features and on design patterns for the front-end, which encode the essential elements of gamification in a platform-independent way.
Keywords: Model Driven Engineering, gamification, rapid prototyping, code generation, IFML.
1 Introduction
Gamification is defined as the injection of game elements in non-gaming applications [9]. Its main purpose is to boost the capacity of an application to engage users, improve their proficiency and satisfaction, and retain them. Gamification techniques have been applied in a variety of domains, both commercial and not-for-profit: customer relationship management [18], travel [33], education [25], fitness [38], and environmental awareness [16]. In
its simplest form, gamification entails monitoring the activity of the user and her progress towards goals and providing rewards for her achievements [9]. All four ingredients of gamification present themselves in an ample variety of forms: actions can be either endogenous to the application (e.g., accessing the application, creating content, executing tasks, etc) or external to it (e.g., specific behaviors detected through sensing and activity recognition, such as running, consuming less energy or water, visiting designated places, etc). They can be individual or collective, if team formation is exploited. Goals can be either set by the application (e.g., reaching a predefined level of expertise or of activity) or self-determined by the user (e.g., running a given number of kilometers per month or saving 10% in energy or water consumption w.r.t. the preceding month). Achievements can be established by the application (e.g., by monitoring some progress indicators) or by other users (e.g., by collecting votes cast by the community or by expert users). Rewards can be intangible (e.g., promotion to a higher status in the application, assignment of points and badges) or real (e.g., goods, services, or price discounts). The achievement status of the user can be kept private or exposed, e.g., using public leaderboards. The above mentioned gamification ingredients are intertwined with the application functions and can evolve over time; the designer may start with a simple gamification scheme and add more advanced features progressively, or she may modify the gamification rules to steer the users’ behavior towards a desired objective. Such a dynamic nature of gamification requires an agile development methodology, supporting the rapid prototyping of features and their adaptation over time. Agility can be attained by exploiting a pattern-based model-driven development life-cycle:
- **Design patterns** are defined as partial solutions to recurrent design problems [28]. They are particularly useful when some application features repeat, albeit with variations, across multiple domains. Their use can speed up development and also improve quality, because patterns embody design knowledge distilled in many solutions.
- **Model-Driven Engineering** advocates the use of models as the central artifacts of application development, from which the implementation can be derived [32]. Models are abstract representations of the application features, independent of technological details; as such, they pair well with the notion of patterns, because they allow the expression of the design knowledge embedded in patterns in a way that does not depend on the technical space in which a specific application instance is built.
In this paper, we define the model-driven patterns that embody the design knowledge necessary to develop a gamified application or to add gamification features to an existing system. The contribution of the paper can be summarized as follows:
- We recall the essential concepts of application gamification and define a reference architecture for gamified solutions. We formalize the gamification concepts and their relationships by means of a Gamification Domain Model.
- We identify a set of design patterns that embody the essential elements of application gamification; we represent such patterns using the Interaction Flow Modeling Language (IFML) [3] for the front end part, and UML sequence diagrams [31] for the back-end business logic.
- We showcase how the proposed model-driven pattern-based methodology has been applied to the development of two real world applications in the area of environmental awareness.
The rest of the paper is organized as follows: Section 2 surveys the related work in the areas of model-driven and pattern-based development, gamification and environmental awareness applications. Section 3 briefly describes the IFML modeling language. Section 4 defines the essential concepts of gamified solutions and provides a reference architecture for their development and execution. Section 5 presents the Domain Model supporting the design of gamification patterns and Section 6 presents the game rule engine and the exposed service API. Section 7 illustrates the patterns for the front-end of gamified applications, expressed in IFML. Section 8 discusses the use of the proposed model-driven pattern-based methodology in the development and evolution of two applications. Finally, Section 9 provides the conclusions and highlights the envisioned future work.
2 Related work
**Pattern-based Model-Driven Engineering.** Model-driven engineering (MDE) is the systematic use of models as primary artifacts throughout the engineering life-cycle and is at the core of industrial tools such as Business Process Management Systems (BPMS) and Rapid Application Development (RAD) Platforms. Many sectors of the software development industry have adopted MDE, for example, to design SOA architectures [35], to secure SOA
data flows [17], to manage IoT infrastructures [6], and even to deploy machine learning processes on cloud infrastructures, e.g., with such tools as Azure Machine Learning Studio\(^1\). MDE and patterns match well: patterns capture design knowledge and models enable the reuse of such knowledge in a platform-independent way. Several Web MDE frameworks integrate patterns as building blocks for the automatic generation of user interfaces, as demonstrated e.g., in [29] and [10]. Koch et al. extended UML-based Web Engineering (UWE) to integrate patterns for Rich Internet Applications (RIAs), such as “auto-completion” or “periodic refresh” [20]; Fraternali et al. extended a web engineering methodology to represent the features of RIAs by allocating specific functionality in the appropriate tier and specifying suitable design patterns for dealing with the interaction between the tiers [12].
A pattern-based model-driven approach for safety-critical systems was proposed in [14], to model dependability patterns and enable their reuse across domains. Zdun et al. created an intermediate abstraction level consisting of pattern primitives, which can be used as building blocks for actual patterns applicable to the model-driven development of SOA processes [37].
**Model-driven approaches for games and gamification.** In this field, Herzig proposed the Gamification Modelling Language (GaML) [23], [15], a modelling language for game design formalized as an Xtext grammar\(^2\); the proposed approach transforms the textual definition of the game rules into JSON and Drools Rule Language (DRL) files, interpreted by software components based on the Unity\(^3\) achievement system plugin and the Drools Business Rule Engine\(^4\) (BRE). Calderon et al. [4] proposed a graphical modelling language for gamification, supported by the Eclipse Modelling Framework\(^5\) (EMF): the gamification domain, the rules and the interactions with non-gamified components can be defined graphically, and the resulting model can be transformed into code for a Complex Event Processing (CEP) engine and for an Enterprise Service Bus (ESB). In both the above mentioned approaches a change of the gamification policies can only be accomplished in one of two ways: 1) by modifying the generated code, which requires expert programming skills, since generated code is not always human-readable; 2) by updating the model and regenerating the gamification artifacts, which requires the redeployment of the gamified components and makes evolution
---
\(^2\) [https://www.eclipse.org/Xtext/](https://www.eclipse.org/Xtext/)
\(^3\) [https://unity.com/](https://unity.com/)
\(^4\) [https://www.drools.org/](https://www.drools.org/)
\(^5\) [https://www.eclipse.org/modeling/emf/](https://www.eclipse.org/modeling/emf/)
time consuming. The approach proposed in this paper factors out the gamification control rules from the gamification front-end patterns, which allows developers to change the former independently of the rest of the application.
**Serious games and gamified environmental applications.** Serious games and gamification have been applied in the environmental field because their motivational power is a desirable characteristic for engaging people in such areas as environmental education, consumption awareness and efficiency behaviours [24]. Ecogator [27] is an efficiency advisor mobile app that scans the energy labels of an appliance and provides hints about its efficiency, such as the annual running cost and the total cost over the product lifetime; it also compares two products to support the user in the decision-making and delivers energy efficiency tips on a daily basis. Drop! The question [11] is a card game with a digital extension for educating players about water saving; the game exploits a “push your luck” mechanics, in which the player repeatedly draws cards, with an increasing risk of losing; the cards are illustrated with water efficient and inefficient behaviours; when the player draws a “bad” card, she must scan a QR code on it with the mobile app and answer a water-related question. Other examples of applications providing environmental education are described in [22], [19] and [1]. Social Power [8] aims at raising energy consumption awareness for users in households and shared spaces, such as schools and libraries. It exploits social interactions and game mechanics to drive people towards more sustainable energy consumption; upon registration users are assigned to a team and receive individual and collective saving goals. The app monitors consumption, by connecting to smart meters, and assigns points to individuals and teams when they save. The SmartH2O project [30] focuses on water saving; its gamified web portal and mobile app connect to water smart meters and enable users to monitor their consumption in quasi-real-time and to pursue weekly water saving goals. 
enCOMPASS [13] is a project aimed at increasing awareness about efficient energy consumption; it uses smart meters and sensors to collect energy consumption, indoor climate and user activity information to provide personalized recommendations for energy saving based on the user profile, habits and preferred comfort level. The project exploits a gamified app and a hybrid (card and digital) game, to achieve the desired impact on the consumers. Some applications exploit virtual environments to teach efficient behaviours to users. An example is Water Mansion [36], a serious game in which users must execute daily tasks, such as showering or washing dishes; each action increases water consumption and reduces the “gold” that the user owns; the objective is to learn about water efficient consumption and its economic impact. A similar approach is applied in EnerGAware [5], a mobile simulation game, in which the objective is to reduce the energy consumption of a virtual house with respect to the previous week. The player can execute actions, such as changing the location of the lamps in a room or turning off appliances (lights, TV, etc.). At the end of every week, the players receive points based on the energy saved. The SmartH2O and enCOMPASS projects, which have been developed with the methodology proposed in this paper, will be described in more detail in Section 8.
3 Background: IFML in a nutshell
The Interaction Flow Modeling Language (IFML) [26] is an OMG standard for the platform-independent description of the front-ends of interactive applications. With IFML developers specify the organization of the interface, the content to be displayed, and the effect on the interface produced by the user interaction or by system events. The business logic of the actions activated by the user interaction can be modeled with any behavioral language, e.g., with UML sequence diagrams. Figure 1 shows the essential elements of the IFML metamodel.

Interface Structure. The core IFML element for describing the front-end structure is the ViewElement, specialized into ViewContainer and ViewComponent. ViewContainers denote the modules that comprise the interface content; a ViewContainer can be internally structured in a hierarchy of sub-containers, e.g., to model a main interface window that contains several frames, which in turn contain nested panes, and so on. A ViewContainer can include ViewComponents, which represent the actual content of the interface. The generic ViewComponent specializes into different elements, such as lists, object details, data entry forms, and more. Figure 2 shows the notation: the Search ViewContainer comprises a MessageKeywordSearch Form ViewComponent, which represents a data entry form; MailBox includes a MessageList List ViewComponent, which denotes a list of items; finally, MessageViewer comprises a MessageContent Details ViewComponent, which displays one object. ViewComponents can have input and output parameters: a Details ViewComponent may have an input parameter that identifies the object to display, a form has output parameters corresponding to the submitted values, and a List ViewComponent has an output parameter that identifies the selected item.
Events, Navigation and Data Flows. ViewElements (ViewContainers and ViewComponents) can be associated with Events, to express that they support the user interaction. For example, a List ViewComponent can be associated with an Event for selecting one or more items (as in Figure 3), and a Form ViewComponent with an Event for input submission. The effect of an Event is represented by a NavigationFlow, denoted by an arrow, which connects the Event to the ViewElement affected by it (as shown in Figure 3). When an event occurs, the target ViewElement of the NavigationFlow associated with it gets in view and the source ViewElement may stay in view or switch out of view, depending on the structure of the interface. In Figure 3a, the NavigationFlow associated with the SelectMessage Event
connects its source (MessageList, which displays a list of objects), and its target (MessageContent, which displays the data of an object). When the Event occurs, the content of the target ViewComponent is computed so to display the chosen object, and the source remains in view since it is in the same ViewContainer. In Figure 3b the source and target ViewComponents are in distinct ViewContainers (MailBox and Message); the SelectMessage Event causes the display of the Message ViewContainer, with its content, and the replacement of the MailBox ViewContainer, which exits from view.
IFML can also show the objects from which ViewComponents derive content, their inputs and outputs, and the parameter passing from the source to the target of the NavigationFlow.
In Figure 4 the ViewComponents comprise a DataBinding element that identifies the data source, which can be an object class defined in the Domain Model of the application. Both the ViewComponents in Figure 4 derive their content from the MailMessage entity. MessageContent also comprises a ConditionalExpression, i.e., a filter used to extract the content to publish; such ConditionalExpression is parametric: it extracts the object whose MessageID attribute value equals the Msg_ID parameter supplied by the SelectMessage Event. The parameter passing rule is represented with a ParameterBindingGroup element associated with the NavigationFlow, which couples an output parameter of the source ViewComponent to an input parameter of the target ViewComponent. NavigationFlows enable the expression of the effect of an Event and the specification of parameter passing rules. Yet, a parameter passing rule can be expressed independently of an interaction Event, using DataFlows. Figure 5 shows the DataFlow construct, representing an input-output dependency between a source and a target ViewElement, denoted as a dashed arrow. MailViewer includes three ViewComponents: the MailMessages List is defined on the MailMessage entity, and shows a list of messages; the MessageContent Details is also defined on the MailMessage entity and displays the data of a message; the Attachments List is defined on the Attachment entity and shows a list of mail attachments. The identifier of the selected message is passed from MailMessages to MessageContent, which has a parametric ConditionalExpression to extract the message with the identifier provided in input. Also Attachments has a parametric ConditionalExpression, to select the attachments of the mail message provided in input. When the ViewContainer is accessed, the list of messages is displayed, which requires no input parameters. The DataFlow between MailMessages and MessageContent expresses a parameter passing rule between its source and target: even if the user does not trigger the Select Event, an object is randomly chosen from those displayed in the MailMessages List and supplied as input to MessageContent, which displays its data. Similarly, the DataFlow between the MessageContent and Attachments specifies an automatic parameter passing rule for the list of attachments. By triggering the Select event,
the user can choose a specific message from the list and display its content and attachments.
**Actions.** An Event can also cause the triggering of business logic, executed prior to updating the state of the user interface; the IFML Action construct, represented by a hexagon symbol as shown in Figure 6, denotes the invoked program, which is treated as a black box, possibly exposing input and output parameters. The effect of an Event firing an Action and the possible parameter passing rules are represented by a NavigationFlow connecting the Event to the Action and possibly by DataFlows incoming to the Action from ViewElements of the interface. The execution of the Action may cause a change in the state of the interface and the production of input parameters for some ViewElements; this is denoted by termination events associated with the Action, connected by NavigationFlows to the ViewElements affected by the Action. Figure 6 shows an example of Action, for the creation of a new object. ProductCreation includes a Form with SimpleField sub-elements for specifying the data entry of a new product. The CreateNewProduct Event triggers the submission of the input and the execution of the CreateProduct Action. A ParameterBindingGroup is associated with the NavigationFlow from the CreateNewProduct Event, to express the parameter binding between the Form and the Action. The Action has two termination Events: normal termination leads to the visualization of the NewProductData ViewComponent within the NewProductDisplay ViewContainer; upon abnormal termination, an Event and NavigationFlow specify that an error message ViewComponent is displayed in a different ViewContainer.
4 Gamification Concepts and Architecture
Application gamification aims at engaging users by fostering their involvement and by enhancing their motivations to perform well in the accomplishment of a task [7]. Gamified platforms expose rules that guide the users through a progression of tasks and direct them towards the accomplishment of the defined objectives, while providing feedback and keeping them interested with elements that promote competition, collaboration and self-improvement. For example, fitness applications, such as Runtastic\(^6\) and Nike Run Club\(^7\), showcase the above mentioned gamification design principles: they motivate the users to achieve an objective, provide activity statistics, assign challenges (personal and collaborative) adequate to the user’s level, award achievements for accomplished goals, promote competition through periodic leader boards and offer nutritional information to guide users. Before discussing the technical components of a gamified application, we introduce the concepts that characterize the design of gamification in a platform-independent and cross-domain way.
- **Action**: is an activity that the user can perform. Actions can be done within the platform (e.g., accessing the application, watching content, providing information or feedback, etc), or outside it; external actions are typically measured through sensors or activity recognition (e.g., running an amount of kilometres, consuming less energy, etc).

\(^6\) [https://www.runtastic.com](https://www.runtastic.com)
- **Goal**: is a user-defined or platform-defined measurable objective that involves performing a series of actions; goals are characterized by a target value to reach (e.g., a minimum rating of produced content, an amount of kilometers, a percentage reduction of water consumption, etc) and optionally by a target deadline when the objective will be verified.
- **Points**: are the “unit of merit” used to reward actions and to recognize the accomplishment of goals; they are sometimes called “credits”, especially when they can be redeemed or exchanged for goods within or outside the platform.
- **Achievement**: is a recognition, typically a badge, assigned to the user when a certain level of progression in a specific area is reached. Achievements should be visible to other users in the community for social recognition and to promote competition.
- **Reward**: is a digital or real world item that users can claim when a pre-defined condition is fulfilled; it may require the user to exchange points for the reward, or it can be assigned when an achievement is attained without further requirements.
- **Leader board**: is a list of the players ordered by a merit criterion, such as collected points or completed activities. It may be computed immediately or periodically (weekly, monthly, etc). More than one leader board can be used: for example, a long-term leader board helps engage expert users, whereas a short-term (e.g., weekly) one fosters the engagement of novices, who see their initial achievements publicly recognized.
- **Notifications**: are messages about important events or states, delivered periodically (e.g., at the end of established periods) or upon the occurrence of a condition (e.g., the attainment of an achievement). They help preserve motivation and direct the user’s attention to topics or tasks of interest. Notifications can be delivered inside the application, or outside it, e.g. by email.
- **Thematic Areas**: are categories in which actions, goals, achievements, and rewards can be grouped. They are used when engagement relies on
a plurality of stimuli: for example, in a collaborative learning platform, thematic areas could span personal learning, collaboration with other users, reputation improvement, etc.
4.1 A Gamification Architecture
An architecture for gamified applications should minimize the interference of the gamification rules with the business logic of the application and limit the integration effort. Figure 7 presents the high-level architecture experimented with in multiple gamification projects, described in Section 8, which proved effective in integrating gamification into web and mobile applications. Its modules embed the essential functions of a gamified application.

The **Core Business Logic** module implements the actions to be performed by the user. Given the cross-domain nature of gamification, this module is represented as a generic software component, which delivers its services possibly relying on an application database storing the state of the non-gamified portion of the application. An important responsibility of this layer is to notify the Gamification Engine about the execution of actions by the user; the activities that are relevant for gamification should be registered in the Gamification Engine and their implementation should integrate the dispatching of relevant events to the Gamification Engine.
The **Gamification Engine** (GE) implements the registration of gamified actions, the establishment of goals, the assignment of points based on the
executed actions, the detection of achievements, and the delivery of rewards. The GE provides a service API for the Core business logic component to dispatch action events; it also exposes a query API for the gamification patterns embedded in the Gamified Application to retrieve information about the progress of users in the gamification exercise. Configurability is attained by factoring out the parameters that control gamification into a declarative specification, stored in the Gamification Database.
The Gamification Database stores the entities that allow the GE to execute the gamification rules and enable the Gamified Application to publish the user progress and state; it contains configuration parameters that enable the declarative specification of the gamification logic and facilitate its evolution. The domain model of the Gamification Database is discussed in Section 5.
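For illustration, the declarative specification held in the Gamification Database might resemble the following sketch; the attribute names are hypothetical, loosely mirroring the parameters of the Domain Model in Section 5, and are not the schema actually used by the projects:

```python
# Hypothetical declarative configuration of action types, mirroring the
# parameters described for the Gamification Database: points awarded,
# repeatability, minimum interval between repetitions, thematic area.
ACTION_TYPES = {
    "UPLOAD_METER_READING": {
        "points": 10,
        "repeatable": True,
        "min_interval_hours": 24,   # at most one rewarded reading per day
        "thematic_area": "consumption_awareness",
    },
    "COMPLETE_PROFILE": {
        "points": 50,
        "repeatable": False,        # one-off action
        "min_interval_hours": None,
        "thematic_area": "participation",
    },
}

def points_for(action_type_id: str) -> int:
    """Look up the points awarded for an action type (0 if unknown)."""
    cfg = ACTION_TYPES.get(action_type_id)
    return cfg["points"] if cfg else 0
```

Because the rules live in data rather than in code, changing the reward for an action amounts to editing one entry, with no redeployment of the engine.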
The Notification Engine implements the logic for delivering the notifications that provide feedback to the users and remind them of their goals. Notifications can be configured to be triggered when the user performs actions, reaches goals, attains achievements or unlocks rewards. Notification delivery is controlled by parameters in the Gamification Database, to facilitate change. Configuration data also comprise the delivery channels and message templates; examples of delivery channels comprise email messages or messages of such systems as the Google Firebase Messaging Framework\(^8\).
The Deadline Manager is a cron process, which monitors the expiry of deadlines, configured in the Gamification Database, and calls the GE to enact the necessary procedures to check the status of the gamification and possibly notify users of relevant events.
The Gamified Application is the topmost layer of the architecture, which integrates the gamified view components into the business views. It exploits the Gamification Database, the gamification patterns, and the GE back-end services, to present game-related information contextualized in the business views and to capture and dispatch the user’s action events.
The architecture of Figure 7 circumscribes the effort to integrate the business and the gamification logic within two components: 1) the Core Business Logic must be extended in such a way that the execution of business actions notifies the GE; no other modifications to the native business logic of the application are required; 2) the Gamified Application must complement the business user interface with the views and view components for disclosing the state of the gamification and for engaging the user in performing the gamified actions. It does so by incorporating the patterns illustrated in Section 7.
---
8 https://firebase.google.com/docs/cloud-messaging/
The extension of the Core Business Logic to support the dialogue with the GE can be implemented with the help of the Observer pattern [28], whereby the GE acts as an Observer notified about the occurrence of events by the business logic, which acts as the Subject. The Observer-Subject relationship can be realized tightly, by extending the implementation of the business actions with explicit notification calls to the GE, or loosely, by having the GE poll the business logic component for status changes.
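A tight realization of this Subject–Observer dialogue can be sketched as follows; the class and method names are illustrative assumptions for exposition, not the paper's actual API:

```python
# Minimal sketch of the Observer pattern linking the Core Business Logic
# (Subject) to the Gamification Engine (Observer). Names are illustrative.

class GamificationEngine:
    """Observer: receives action events dispatched by the business logic."""
    def __init__(self):
        self.received = []

    def notify(self, user_id: str, action_type: str) -> None:
        # In the real architecture this call would run the GE's
        # ProcessUserAction service (Section 6.1).
        self.received.append((user_id, action_type))


class CoreBusinessLogic:
    """Subject: executes business actions and notifies registered observers."""
    def __init__(self):
        self._observers = []

    def register(self, observer) -> None:
        self._observers.append(observer)

    def execute_action(self, user_id: str, action_type: str) -> None:
        # ... the native business logic runs here, unchanged ...
        # then the gamification-relevant event is dispatched:
        for obs in self._observers:
            obs.notify(user_id, action_type)


logic = CoreBusinessLogic()
engine = GamificationEngine()
logic.register(engine)
logic.execute_action("alice", "SAVE_WATER")
```

The loose variant would instead have the GE periodically poll the business component for status changes, trading immediacy for an even smaller footprint on the business code.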
5 Gamification Domain Model
The Gamification Domain Model, shown in Figure 8, describes the entities and relationships used by the Gamification Engine to regulate the gamification actions, track user activity and progress, and assign achievements to the performing users. The provision of a Domain Model, independent of the code of the Gamification Engine and of the Gamified Application, facilitates the configuration of gamification and its dynamic evolution during the application maintenance, to adapt the engagement strategy to the evolution of the users’ response to the gamification stimuli.
The main entities of the Gamification Domain Model are:
- **User**: is used for identification and profiling, with the usual information about the username, password, email, etc. If this information is already present in the application business database, the GE contains a replica or a view of the original data.
- **Gamification User**: specializes the User entity with attributes pertinent to gamification (total points, available credits, etc.).
- **Group**: is used to cluster users with different characteristics; it helps tailor the gamification stimuli to the specific needs of a user group and to compare users with similar characteristics, e.g., in the leader boards.
- **Thematic Area**: organizes actions and achievements that pertain to the same topic of interest, to focus the attention of the user and provide structure to her participation.
Figure 8: Gamification Domain Model

- **Action Type**: expresses a class of actions that can be performed, the configuration parameters that control the evaluation of such actions and the assignment of the points associated to them. The configuration parameters include the number of points awarded, the repeatability of the action, the minimum time interval between repeated executions, and the associated rewards. Action types can be associated to thematic areas.
- **Action**: denotes the actual instances of an action type performed by users.
- **Goal Type**: represents a category of goals. A goal type is associated with an indicator, which measures quantitatively the attainment of the goal. A goal type may be periodic or absolute: a periodic goal is checked at recurrent deadlines (e.g., every week or month), whereas an absolute goal is verified at a specific deadline (e.g., the end of the gamification exercise). A goal type is also associated with an action type, which denotes the action that must be fired to signal that the goal has been met and to update the gamification status of the user accordingly.
- **Goal**: represents the actual goals associated to the user; a goal is characterized by the user it belongs to, by a target value and by a status. The status can be in progress, achieved or missed. An achieved goal is associated to the action that has been created to trigger the assignment of points corresponding to its accomplishment.
- **Badge Type**: denotes a class of achievements, represented by badges that show the progression of the user in a thematic area. A badge type is characterized by a title, a description, a level, the required amount of points to attain it, the thematic area it belongs to, and an image that represents it visually.
- **Badge**: represents the actual badges acquired by the users.
- **Reward Type**: expresses a category of rewards, i.e., of prizes that a user can claim once she has performed the required actions or reached the required amount of points. Reward types have a title, a description, the required number of points, and an image.
- **Reward**: denotes the instances of the reward acquired by the users, characterized by a redemption date and by a confirmation code.
- **Deadline**: denotes a time point at which the status of the gamification should be checked; deadlines can be *periodic*, e.g., daily or monthly, or *absolute*, e.g., a fixed termination date of a gamification exercise. Deadlines can be *hard* or *soft*: hard deadlines are associated to goals, to
force their evaluation at specified times. Soft deadlines serve the purpose of checking the user’s progress, with the aim of notifying the user and stimulating the attainment of goals.
- **Notification Type**: expresses a category of notifications, which can be sent to alert users about events and provide feedback. Notification types have a title, a description, an icon, and a delivery channel.
- **Notification**: denotes the actual notifications sent to the user.
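As an illustration, a slice of the Domain Model above can be sketched as plain data structures. The following Python sketch is hypothetical; class and field names are assumptions, not the paper's actual schema:

```python
# Hypothetical sketch of a slice of the Gamification Domain Model.
# Class and field names are illustrative assumptions, not the paper's schema.
from dataclasses import dataclass
from enum import Enum

class GoalStatus(Enum):
    IN_PROGRESS = "in progress"
    ACHIEVED = "achieved"
    MISSED = "missed"

@dataclass
class GoalType:
    name: str
    indicator: str       # quantitative measure of goal attainment
    periodic: bool       # periodic (recurrent deadline) vs absolute
    action_type_id: str  # action fired when the goal is met

@dataclass
class Goal:
    user_id: str
    goal_type: GoalType
    target_value: float
    status: GoalStatus = GoalStatus.IN_PROGRESS
```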
### 6 Gamification Engine
The GE is the component that provides gamification services to applications. It exploits the Gamification Domain Model to manage user’s activity, assigned points, verified goals, user’s achievements, notifications, and rewards. The data that control the GE are edited with the GE Configurator, whereby the manager of the platform can create, modify, and remove gamification elements. The GE interacts with the **Notification Engine (NE)**, which delivers notifications. It can be seen as a process that handles two types of events: the posting of a user’s action from the Gamified Application and the expiry of a gamification deadline, signalled by the Deadline Manager module.
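Seen as a process handling the two event types, the GE can be sketched as a simple dispatcher. This is an illustrative sketch, not the actual GE API; all function and field names are assumptions:

```python
# Illustrative sketch (not the actual GE API) of the engine as a handler of
# two event types: user actions posted by the Gamified Application and
# deadline expiries signalled by the Deadline Manager.
def make_engine(process_user_action, check_user_goals):
    def handle(event):
        if event["type"] == "user_action":
            return process_user_action(
                event["user_id"], event["action_type_id"], event["timestamp"])
        if event["type"] == "deadline_expired":
            return check_user_goals(event["user_id"])
        raise ValueError("unknown event type: " + event["type"])
    return handle
```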
#### 6.1 Gamification Services
The GE exposes an API to process the user’s actions and the expiry of deadlines, which we describe with UML sequence diagrams.
**ProcessUserAction**: this service takes in input the ID of a user, the ID of an action type, and the time stamp of the action occurrence; it checks the validity of the user’s action, grants points, and verifies achievements and rewards. The process steps are as follows:
1. **ValidateUserAndAction**: if the action type and user are valid and active, then the action is assigned to the user; otherwise, it is ignored and the process ends.
2. **ValidateExecutability**: if the action type is non-repeatable and this is the first action of the type performed by the user, or if the action type is repeatable and the elapsed time since the last occurrence is larger than the interval configured for the action type, then the action is assigned to the user; otherwise, it is ignored and the process ends.
3. **GetPointsAndCheckAchievements**: the total number of points corresponding to the thematic areas of the action type is computed; if the required amount of points for an achievement in that area is reached, a badge is assigned to the user and a corresponding notification request is sent to the Notification Engine.
4. **CheckRewards**: if the user has reached the necessary points, the reward is made claimable for the user and a signal is sent to the Notification Engine.
5. **UpdateUserPoints**: points are granted and leader boards updated.
The **ProcessUserAction** service is invoked by the Gamified Application, after a gamified action, and by the **CheckUserGoals** service, when the user has met a goal at a hard deadline. Figure 9 shows the sequence diagram of the **ProcessUserAction** service.
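The steps above can be sketched in a few lines. The following is an illustrative simplification (a single thematic area, dictionary-based records, hypothetical field names), not the actual service implementation:

```python
# Illustrative simplification of the ProcessUserAction steps: one thematic
# area, dictionary-based records. Not the actual GE implementation.
def process_user_action(user, action_type, timestamp, last_occurrence):
    # 1. ValidateUserAndAction: user and action type must be valid and active
    if not (user["active"] and action_type["active"]):
        return None  # action ignored, process ends
    # 2. ValidateExecutability: non-repeatable actions only once; repeatable
    #    actions only after the configured minimum interval has elapsed
    if action_type["repeatable"]:
        if last_occurrence is not None and \
                timestamp - last_occurrence < action_type["min_interval"]:
            return None
    elif last_occurrence is not None:
        return None
    # 3. GetPointsAndCheckAchievements: grant points, check badge threshold
    user["points"] += action_type["points"]
    badge = None
    if user["points"] >= action_type["badge_threshold"]:
        badge = "badge in " + action_type["area"]  # would also notify the NE
    # 4.-5. CheckRewards / UpdateUserPoints are analogous and omitted here
    return {"points": user["points"], "badge": badge}
```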
**CheckUserGoals**: takes in input a user ID and checks if the user has reached the goals associated with her. A goal is reached if a performance indicator, managed by the core business logic of the gamified application, is greater than or equal to its target value. For example, in a technical support system the goal target value could represent the minimum number of “likes” received by the user’s posts during the period. Figure 10 illustrates the processing steps of the service:
1. **RetrieveActiveGoals**: fetches the goals with “in progress” status and expired deadline.
2. **GetIndicatorValue**: the service queries the Core Business Logic component to retrieve the current value of the indicator needed to evaluate the goal.
3. **CheckGoalAccomplishment**: if the indicator value is greater than or equal to the target value, the goal status is updated to “achieved”, the **ProcessUserAction** service is called to award the corresponding points, and a goal accomplishment signal is sent to the Notification Engine. Otherwise, the goal status is updated to “missed” and a missed goal signal is sent to the Notification Engine.
The **CheckUserGoals** service is invoked by the DeadlineManager component when a hard deadline expires.
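The goal-checking logic can be sketched as follows. This is an illustrative sketch in which `get_indicator_value` stands in for the query to the Core Business Logic component; all names are assumptions:

```python
# Illustrative sketch of CheckUserGoals; `get_indicator_value` stands in for
# the query to the Core Business Logic component. Names are assumptions.
def check_user_goals(goals, get_indicator_value, now):
    results = []
    for goal in goals:
        # 1. RetrieveActiveGoals: "in progress" goals with expired deadline
        if goal["status"] != "in progress" or goal["deadline"] > now:
            continue
        # 2. GetIndicatorValue
        value = get_indicator_value(goal["user_id"], goal["indicator"])
        # 3. CheckGoalAccomplishment (the real service would also call
        #    ProcessUserAction and signal the Notification Engine)
        goal["status"] = "achieved" if value >= goal["target"] else "missed"
        results.append((goal["user_id"], goal["status"]))
    return results
```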
**VerifyGoalProgress**: this service is similar to **CheckUserGoals** but computes the percentage of the indicator value still missing to reach the goal, instead of verifying the goal completion. The **VerifyGoalProgress** service is invoked by the DeadlineManager component when a soft deadline expires.
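The computation behind VerifyGoalProgress reduces to a simple formula; a hypothetical sketch (the function name is an assumption):

```python
# Hypothetical sketch of the VerifyGoalProgress computation: the percentage
# of the indicator value still missing to reach the goal target.
def missing_percentage(current, target):
    if target <= 0:
        return 0.0
    return max(0.0, (target - current) / target * 100.0)
```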
Note that goal checking is executed asynchronously w.r.t. the user’s actions by the two services **CheckUserGoals** and **VerifyGoalProgress**. Both services are triggered by deadlines. Synchronous checking could be accomplished by extending the **ProcessUserAction** service with the logic to check goal attainment, as done for the achievements.
Figure 9: Sequence diagram of the ProcessUserAction service.
Figure 10: Sequence diagram of the CheckUserGoals service.
**RedeemRewards**: this service takes in input a user ID and a reward ID and supports the redemption of rewards by the user. If the operation completes successfully, the user receives a confirmation code enabling the claim and the platform manager is notified about the event, so that he can handle the delivery process. Figure 11 shows the steps of the service.
1. **CheckCredit**: the user’s credits and the availability of the reward are checked to confirm that the user can claim the item. If such requirements are not met, the service returns a failure message to the invoking Gamified Application.
2. **AssignReward**: if the requirements are met, the reward is assigned to the user, the credits are updated, and a notification request including the confirmation code is sent to the NE.
The *RedeemRewards* service is invoked by the Gamified Application, whose interface allows the user to start the redemption process. Besides the above-mentioned services, the GE also exposes CRUD operations for the management of the content of the Gamification Database (e.g., the creation of self-assigned goals by the users).
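The two redemption steps can be sketched as follows. This is an illustrative simplification with hypothetical names; the real service would also send the confirmation notification to the NE and alert the platform manager:

```python
# Illustrative simplification of the RedeemRewards steps; the real service
# would also notify the NE and the platform manager. Names are assumptions.
import secrets

def redeem_reward(user, reward):
    # 1. CheckCredit: verify the user's points and the reward availability
    if user["points"] < reward["required_points"] or reward["stock"] <= 0:
        return {"ok": False, "reason": "insufficient credit or unavailable"}
    # 2. AssignReward: update credits and issue a confirmation code
    user["points"] -= reward["required_points"]
    reward["stock"] -= 1
    return {"ok": True, "confirmation_code": secrets.token_hex(4)}
```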
### 7 Gamification front-end patterns
Gamification impacts not only the back-end of a solution, but also the front-end, which must support the execution of the gamified actions and the display of the gamification status. Across the different domains in which gamification techniques can be applied, it is possible to recognize recurrent functions. In the spirit of MDE, such features can be captured as patterns, expressed by models that can be transformed into actual application components through model-to-text transformations. This section introduces the patterns that express the most common features of gamified applications, represented as IFML models following the notation explained in Section 3.
#### 7.1 Gamified Login and Home Page
The Gamified Login and Home Page pattern (shown in Figure 12) extends the well-known login functionality to award points when the user accesses the application, with the objective of encouraging continuous usage. The pattern consists of a ViewContainer (Login) comprising a form for inputting the user’s credentials. The Submit event associated to the form triggers the ValidateUserCredential action, which checks the credentials provided by the user; in case of failure (event AuthenticationFailed), the Login ViewContainer is re-displayed and shows the error message output by the ValidateUserCredential operation; in case of success (event AuthenticationSuccess), the ProcessUserAction GE service is invoked, passing in input the current time stamp, the ID of the user, and the ID of the gamified action associated to the user’s login. Upon successful completion of the ProcessUserAction, a Home ViewContainer is displayed. The Home ViewContainer should include a reminder of the activities that the logged-in user can perform; these may be a mix of non-gamified tasks and gamified activities; to express this pattern in a general form, the model of the Home ViewContainer comprises two List ViewComponents, one for the non-gamified and one for the gamified activities. In both ViewComponents an IFML ConditionalExpression (i.e., a predicate) is used to filter the operations and actions pertinent to the logged-in user.
A variant of the basic Gamified Login and Home Page pattern is obtained by extending the model of Figure 12 as follows: an event can be used to capture the failure of the ProcessUserAction operation and transfer the user to a gamification-specific error page, which contains a message warning the user of the reason why his action failed. This extension makes it possible, for example, to manage malicious users that perform overly frequent login and logout operations with the intention of earning points and spamming the system. By setting a proper minimum interval for the login gamified action it is possible to avert such undesired behavior and warn the user of its consequences.
Figure 12: IFML pattern for gamified login and home page.
#### 7.2 Gamified Action
The Gamified Action pattern, shown in Figure 13, demonstrates a generic way to gamify a task performed within the application. The pattern comprises a BusinessTask ViewContainer, which shows a list of available activities related to a gamified action. The Select event of the PendingBusinessTasks ViewComponent lets the user select the task to work on. Such a choice causes the display of a ViewContainer (generically named CompleteBusinessTask in Figure 13), whereby the user can perform the activity (e.g., by inputting data into a form). When the user finishes, she submits the task data to a core business action (generically named ExecuteTask in Figure 13) for validation and storage. If the business action completes successfully, then the GE ProcessUserAction service is called to assign the action to the user. After the successful execution of the GE service, the TaskCompleted ViewContainer is displayed, which presents the details of the performed task. The TaskCompleted ViewContainer also includes a ProgressInArea List ViewComponent, which shows the badges for the thematic area related to the gamified action and provides immediate feedback about the user progress. Note that gamification “surrounds” the interface of the business activity: the CompleteBusinessTask ViewContainer does not contain gamification elements, to avoid distracting the user during the execution of the task.
The basic pattern can be extended by including a ViewComponent presenting the gamification status of the user in the thematic area of the gamified task also in the BusinessTask ViewContainer, to anticipate to the user the impact of executing an activity on her status.
#### 7.3 Goal Selection and Progress
The Goal Selection and Progress pattern, shown in Figure 14, provides concise feedback about the user progress towards her goals, shows the status of the goals already established, and lets the user set her own self-assigned goals. The pattern comprises a Goals ViewContainer, in which a List ViewComponent enables the user to select the goal to visualize. The selection of a goal causes the display of the details of the chosen goal, which typically comprises the target value and the current value of the goal indicator. A Form ViewComponent (SetSelfGoal) enables the user to set a new goal, by inputting the goal type, a target value of the indicator and a deadline. When the user submits the data about the new self-established goal, the SetUserGoal action is triggered, which calls the GE CreateGoal service to update the Gamification Database. Upon the successful completion of the SetUserGoal action, the Goals ViewContainer is re-displayed, with the list of goals updated.
The basic version of the pattern can be enhanced by enriching the display of the current status of a goal with further information, e.g., a prediction of whether the current value of the indicator is such that the goal will be met by the deadline or else the user must increase her level of activity to attain the objective.
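The prediction mentioned above could be realized, for instance, by linear extrapolation of the indicator over the goal period. The following sketch is a hypothetical illustration; the names and the linear model are assumptions, not part of the pattern itself:

```python
# Hypothetical realization of the goal-attainment prediction by linear
# extrapolation of the indicator over the goal period. Names and the linear
# model are assumptions, not part of the pattern itself.
def on_track(current, target, elapsed_days, total_days):
    if elapsed_days <= 0:
        return False
    projected = current / elapsed_days * total_days
    return projected >= target
```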
Figure 14: IFML pattern for Goal Selection and Progress.
#### 7.4 Gamified User Profile
The Gamified User Profile pattern, illustrated in Figure 15, shows a summary of the user status and progress in the gamification exercise. The pattern comprises a User Profile ViewContainer with three ViewComponents: a **UserDetails** ViewComponent displays both general personal information, such as the user name and the profile photo, and gamification-specific data, such as the total points; a **Badges** List ViewComponent summarizes the acquired badges in the different thematic areas; and an **ActionHistory** List ViewComponent presents the actions performed by the user, in reverse chronological order. For each action, the execution time, the description of the action type and the granted points are displayed.
#### 7.5 Leader Board
The **Leader Board** pattern consists of a Detail ViewComponent (**UserPointSummary**), showing the user points summary and personal information, and one or more List ViewComponents, displaying a ranked list of the users sorted by the selected performance criteria (points, badges, etc.). The
exemplary pattern in Figure 16 includes two ranked lists. The first list (PeriodicLeaderBoard) shows the user performance in the current period (e.g., in the current week or month). The second list is an overall leader board that considers the entire duration of the gamification exercise. Both lists have a conditional expression that filters the users belonging to the same gamification group, so that users are compared with “competitors” with homogeneous characteristics.
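The filtering and ranking described above can be sketched as follows. This is an illustrative sketch with hypothetical names; `period_points_of` stands in for a query against the Gamification Database:

```python
# Illustrative sketch of a periodic leader board: filter the users of the
# same gamification group and rank them by their points in the period.
# `period_points_of` is a hypothetical accessor, e.g. a database query.
def leader_board(users, group, period_points_of):
    peers = [u for u in users if u["group"] == group]
    return sorted(peers, key=period_points_of, reverse=True)
```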
Figure 16: IFML pattern for Leader Board.
#### 7.6 Achievement Notification
The Achievement Notification pattern, shown in Figure 17, illustrates the interplay of notifications with the application views and the interaction of the user with the notifications. The pattern comprises a generic BusinessOperations ViewContainer, which represents the interface whereby users perform the application activities, concisely represented by the AvailableTasks List ViewComponent; the ViewContainer also includes a details ViewComponent (AchievementNotification), which displays the notification signalled by the AchievementAssigned system event raised by the Notification Engine. The AchievementNotification ViewComponent shows only the title of the notification, to avoid cluttering the business view; however, the user can trigger the ViewNotification event to access a separate ViewContainer (Notifications) where she can inspect the whole content of the message; the ViewContainer comprises two ViewComponents: the NotificationText ViewComponent shows the full content of the current notification, including the
description of the achievement; the *AllNotifications* ViewComponent lists the received notifications, so that the user can also inspect past messages. If the user wants to get more details about a current or past notified achievement, she can trigger the *ViewAchievementDetails* event, which causes the display of the *Badges* ViewContainer; this comprises a *BadgeDetails* ViewComponent showing the data (title, description, icon, and level) of the badge associated with the selected notification; the *Badges* ViewContainer also comprises an *AllBadges* List ViewComponent displaying all badges of the thematic area; badges already acquired by the user should be highlighted by the implementation of the *AllBadges* ViewComponent.
Figure 17: IFML pattern for Achievement notification.
The pattern can be extended in several ways: 1) including an event to dismiss a notification, so that it will no longer appear in the *AllNotifications* ViewComponent; 2) adding to the *Badges* ViewContainer further components to see also the badges of other thematic areas.
#### 7.7 Reward Visualization and Redemption
The *Reward Visualization and Redemption* pattern, shown in Figure 18, captures the user interaction for viewing the available rewards and for claiming one of them. The pattern comprises the *Credits* ViewContainer, which includes the *UserCredit* ViewComponent summarizing the user total points and the *AvailableRewards* ViewComponent showing the rewards that the user can claim. Triggering the *SelectReward* event, the user accesses the *RedeemReward* ViewContainer, which comprises a ViewComponent (*RewardDetails*) that displays the title, description and image of the selected reward. A Form
ViewComponent (EnterShipmentData) lets the users claim the reward by providing shipment details. When the user submits the form, the RedeemReward action is fired, which calls the corresponding GE service. After the successful completion of the RedeemReward action, the user is led to the Confirmation ViewContainer, where a ViewComponent presents the reward details and the confirmation code.

The basic pattern can be extended by making the shipping status visible in the gamified application, so as to keep the user engaged and avoid the need to provide shipment information through other channels, such as email.
### 8 Case Studies
This section discusses the use of the proposed model-driven pattern-based methodology in the development and evolution of two real world applications in the area of environmental awareness: SmartH2O and enCOMPASS. The applications have been developed with WebRatio\(^9\), a model-driven development environment based on IFML, which supports the definition of reusable modules for patterns and code generation for web and mobile platforms.
---
\(^9\) [https://www.webratio.com/](https://www.webratio.com/)
#### 8.1 The SmartH2O project
SmartH2O is a project aimed at engaging consumers in water saving and at enabling water utilities to better manage water demand thanks to quasi-real time consumption data [30]. An ICT platform collects residential water smart meter data and a client application allows consumers to visualize their water consumption and to receive water saving tips (an example is shown in Figure 19a). The SmartH2O client application exploits gamification to motivate users to change their water consumption behaviour using virtual, physical, and social incentives. The gamified application assigns points to each user access (pattern Gamified login and Home page) and to a variety of actions (pattern Gamified Action), including filling-in profile information, reading tips, watching videos, sharing tips on social networks and inviting friends. Users can check platform-assigned goals and set their own objectives (pattern Goal Selection and Progress). Weekly gamification deadlines are established and two leader boards are used: weekly and overall. Figure 19b shows the interface of the pattern Leader Board. The top-3 users are notified via email of their achievement (pattern Achievement notification) and receive prizes, such as water-related board games, tickets for museums, and gift cards. Figure 19c illustrates the interface of the pattern Reward Visualization and Redemption. Finally, users can monitor their progress through a profile widget (pattern Gamified User Profile), which summarizes the water consumed in the period, the obtained points, the acquired badges and the executed actions.
SmartH2O was deployed in a small municipality in Canton Ticino in Switzerland, and in Valencia, a large urban centre in Spain. Thanks to its use, an average reduction in consumption of 10% in Switzerland and of 20% in Spain has been observed [34]. After the end of the project, the participants kept the water saving habits, which provides evidence of the long-lasting behavioral change effects that a gamified platform can have over a community.
Figure 19: SmartH2O Views representing the patterns: a) Gamified Action, b) Leader Board, c) Reward Visualization and Redemption.
#### 8.2 The enCOMPASS project
enCOMPASS is an ongoing project that implements a socio-technical approach to behavioural change for energy saving. It develops innovative tools to make energy consumption data understandable to different types of users, from residential consumers and school pupils to utility managers, empowering them to collaborate in order to achieve energy savings and manage their energy needs in efficient, cost-effective and comfort-preserving ways [13]. Smart meters and sensors collect energy consumption data and indoor indicators, such as temperature, humidity, and luminosity. Data are analyzed to infer the user activity and comfort standards and to provide personalized energy saving recommendations, based on the user’s profile, habits, and preferred comfort level. A mobile application lets users explore consumption data under multiple visualizations, the indoor climate and comfort indicators, and personalized energy saving recommendations. Gamification is exploited to improve engagement and motivate the users to provide feedback about the personalized recommendations and their comfort levels. The gamification elements are divided into three thematic areas: learning, saving and profiling. The energy saving area encourages users to establish a saving goal at the beginning of every month. Figure 20a shows the realization of the Goal Selection and Progress pattern. A battery metaphor represents the goal indicator value, i.e., the amount of energy already consumed in the month, and the distance between the current value and the goal target. Users that reach the saving goal receive points proportional to the reduction goal. An Achievements page, shown in Figure 20b, realizes the Gamified User Profile pattern and lets the user check their progress in the thematic areas and browse their action history. In-app notifications and mobile alerts are regularly sent to notify users about the important events in the platform. Figure 20c shows the
implementation of the Achievement Notification pattern in the Home page of the app. The enCOMPASS client application implements all the other patterns introduced in Section 7, applies gamification to a broad spectrum of actions, and provides a rich set of views, which blend the display of business and gamification data.

enCOMPASS is currently used by households, schools, and public buildings in three sites in Switzerland, Germany and Greece. The first analysis reveals a 10 to 12% consumption reduction for the residential consumers [21]. A complete analysis is planned at the end of the project, to understand the overall effect of the intervention in households, public buildings and schools.
#### 8.3 Discussion
Gamification is an excellent case for pattern-based model-driven development, because: 1) it relies on well-defined functions that apply, with variations, across all application domains; 2) it requires quick evolution, to adapt the gamification rules to the users’ behavior; 3) it intersects all the tiers of an application and integrates within multiple views of the business interface. Expressing gamification patterns in a platform-independent modeling language, such as IFML, provides several benefits: 1) it allows developers to focus on the core elements of the gamification (which actions to gamify, what rules to establish for controlling the execution and rewarding of actions, how to blend business and gamification views) in a high-level way, deferring lower-level, yet fundamental, aspects such as the visualization of the patterns to the later stage of code generation; 2) it allows design decisions about gamification to be factored out of the application source code, facilitating the evolution of the same application and the porting of design decisions from one application to another. These benefits, which are generally useful for all systems, are essential for user-centric gamified applications, where the main objective is engagement and retention. The ability to change the gamification features quickly allows the fast implementation of such critical updates as the addition of new engagement stimuli, more effective visualizations of the user’s progress, and countermeasures to avert undesired behavior.
##### 8.3.1 Analysis of the case studies
The SmartH2O and enCOMPASS experience demonstrated the usefulness of an application-agnostic gamification architecture and of model-driven gamification patterns in several stages of the development and maintenance process.
The principal lessons learned from the application of the proposed approach can be summarized as follows:
- The use of a formal Domain Model helped align the terminology and concepts across heterogeneous stakeholders and reason about the nuances of gamification early in the project. The distinction among actions, goals, and achievements (either rewards or badges) and the classification of the different types of progress monitoring deadlines helped frame the requirements quickly. The Domain Model concepts generalized well from the SmartH2O project to the enCOMPASS project, despite the greater complexity of the latter in terms of collected data, types of users, gamification rules, and visualization requirements.
- The availability of a “catalog” of front-end patterns helped reduce the space of the possible interface designs to a manageable size, which in turn enabled the rapid convergence to an accepted application configuration. Front-end patterns were mocked-up and different assemblies were discussed during the prototyping phase, speeding up consensus.
- The *Gamified Action* pattern proved the most useful, as it embodies the essence of gamification: the capture of specific users’ actions that should be tracked and rewarded. The pattern distinguishes the application-dependent parts (e.g., the GUI for accomplishing the specific gamified task) and the application-agnostic parts (the tripartite structure selection-execution-confirmation and the signaling of the action to the Gamification Engine). Its use helped regularize the application design across very different tasks.
- The *Goal Selection and Progress* pattern proved the most complex to apply. The high-level nature of the pattern, which speaks in terms of generic business indicators, baselines and progress visualization, required rather intense domain-specific customization to embed it into the concrete application. This may prompt the identification of sub-patterns, adapted to less generic cases (e.g., distinguishing automatically assigned and self-set goals, periodic and non-periodic goal checking, different forms of progress prediction and visualization, etc.).
- The adoption of a pattern-based MDE approach shifted most effort to the presentation customization phase, which had to be realized manually by implementing ad hoc presentation templates applied to the IFML patterns during code generation. This is a well-known problem of MDE in general, which remains the bottleneck also in gamified, pattern-based applications. We are working on methods to simplify the integration of handwritten and automatically generated code, to alleviate the burden of customizing the visualization of presentation-agnostic design patterns [2].
As a final remark, we note that the cross-domain nature of model-driven gamification patterns is further supported by the fact that the design schemes discussed in Section 7 and employed in SmartH2O and enCOMPASS are the same employed in a quite distinct gamification project, the technical support community of WebRatio\(^{10}\), where the business data, the business views, and the target users are extremely different.
\(^{10}\) https://www.webratio.com/community
### 9 Conclusions and future work
The paper describes a pattern-based model-driven methodology for the development of gamified applications, which expresses the gamification concepts within a Domain Model, a run-time architecture and a set of patterns, to facilitate the integration of gamification into existing or new applications. The identified patterns embody recurrent features of gamified applications, which are synthesized into IFML models that promote reuse and customization through Model-Driven Engineering techniques. The proposed approach was put to work in two real-world scenarios, showing that the model-driven encoding of gamification patterns promotes the reuse of design knowledge independently of the technology domain (web, mobile web and native mobile) and across applications. Factoring gamification rules out of the code in a gamification data model enabled the fast (re)configuration of the gamification engine and eased the adaptation to different scenarios. In the future, the proposed methodology can be extended to cover other aspects not addressed by IFML, such as the visualization patterns of the gamification elements. Presentation-oriented models could be devised to capture domain-specific specializations of the IFML components, such as Goal Status or Gamified Action Widget, which incorporate the knowledge of how to present the gamification elements of IFML patterns in a language-independent way. The Gamification Engine can also be extended by adding a data analysis component for the automatic detection and correction of undesired behaviours, such as spamming, or for the automatic adaptation of rules and points based on the user activity.
**Acknowledgements** This work is partially supported by the “enCOMPASS - Collaborative Recommendations and Adaptive Control for Personalised Energy Saving” project funded by the EU H2020 Programme, grant agreement no. 723059.
### References
"google_gemma-3-12b-it_is_public_document": [[0, 2485, true], [2485, 6193, null], [6193, 10029, null], [10029, 13902, null], [13902, 18635, null], [18635, 22151, null], [22151, 25121, null], [25121, 27679, null], [27679, 30468, null], [30468, 33170, null], [33170, 34287, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 34287, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 34287, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 34287, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 34287, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 34287, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 34287, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 34287, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 34287, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 34287, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 34287, null]], "pdf_page_numbers": [[0, 2485, 1], [2485, 6193, 2], [6193, 10029, 3], [10029, 13902, 4], [13902, 18635, 5], [18635, 22151, 6], [22151, 25121, 7], [25121, 27679, 8], [27679, 30468, 9], [30468, 33170, 10], [33170, 34287, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 34287, 0.0]]}
|
olmocr_science_pdfs
|
2024-12-01
|
2024-12-01
|
62513535897f988bffd5c36c20a5b3165a895b54
|
Fault Tolerant Distributed Computing using Asynchronous Local Checkpointing
Phillip Kuang
Dept. of Computer Science
Rensselaer Polytechnic Institute
kuangp@rpi.edu
John Field
Google
jfield@google.com
Carlos A. Varela
Dept. of Computer Science
Rensselaer Polytechnic Institute
cvarela@cs.rpi.edu
ABSTRACT
The transactor model, an extension to the actor model, specifies an operational semantics to model concurrent systems with globally consistent distributed state. The semantics formally tracks dependencies among loosely coupled distributed components to ensure fault tolerance through a two-phase commit protocol and to issue rollbacks in the presence of failures or state inconsistency. In this paper, we introduce the design of a transactor language as an extension of an existing actor language and highlight the capabilities of this programming model. We developed our transactor language using SALSA, an actor language developed as a dialect of Java. We first develop a basic transactor SALSA/Java library, which implements the fundamental semantics of the transactor model following the operational semantics' transition rules. We then illustrate an example program written using this library. Furthermore, we introduce a state storage property known as the Universal Storage Locator as an extension of the Universal Actor Name and Universal Actor Locator abstractions from SALSA that leverages a storage service to maintain checkpointed transactor states. The transactor model guarantees safety but not progress. Therefore, to help develop realistic transactor programs that make progress, we introduce the Consistent Distributed State Protocol and Ping Director that improve upon the Universal Checkpointing Protocol to aid transactor programs in reaching globally consistent distributed states.
Categories and Subject Descriptors
D.3.3 [Programming Languages]: Language Constructs and Features—concurrent programming structures; D.4.5 [Operating Systems]: Reliability—checkpoint/restart, fault-tolerance; D.1.3 [Programming Techniques]: Concurrent Programming—distributed programming
General Terms
Design, Languages, Reliability
Keywords
Actor, Distributed state, SALSA, Transactor
1. INTRODUCTION
The transactor model introduced by Field and Varela is defined to be a “fault tolerant programming model for composing loosely-coupled distributed components running in an unreliable environment such as the Internet into systems that reliably maintain globally consistent distributed state” [5, 3]. Therefore, transactors allow for guarantees about consistency in a distributed system by introducing semantics on top of the actor model that allow them to track dependency information and establish a two-phase commit protocol in such a way that a local commit succeeds only if local state is globally consistent. This allows a transactor to recognize its reliance on other transactors and how they directly influence its own current state. As an extension of the actor model [1], transactors inherit the core semantics of encapsulating state and a thread of control to manipulate that state, as well as communication through asynchronous messaging. In addition, transactors introduce new semantics to explicitly model node failures, network failures, persistent storage, and state immutability. We assume the reader is familiar with the transactor model [5, 3, 7] and will focus in this paper on its implementation.
This paper presents a working implementation of the transactor model as a step toward developing a language to compose programs that follow an actor oriented programming paradigm that inherently maintains global state [7]. To do this we used the SALSA actor language [9, 11, 10] as a base from which we overlay transactor semantics similar to how transactors naturally extend the actor model. This allows users to build loosely coupled distributed systems without a need for central coordination and takes into consideration the high latencies of a wide area network where node and link failures are common occurrences. Our implementation also promotes further research on the transactor model as well as reasoning about composing transactor programs and fault tolerance.
The remainder of this paper is structured as follows: Section 2 provides some background information and important definitions from the transactor model. Section 3 formally describes our implementation of transactors. Section 4 describes how our implementation handles persistent state storage. Section 5 introduces a useful transactor abstraction known as the Proxy. Section 6 presents our Consistent Distributed State Protocol and the Ping Director that allow for creating programs that maintain globally consistent state. Section 7 shows new syntax added to SALSA that encodes the transactor semantics. Section 8 describes a detailed house purchase program example. Section 9 presents some related work. Finally, Section 10 concludes with a discussion and future work.
2. BACKGROUND
In this section we provide a very brief summary of the transactor model and describe important terms used in the rest of the paper. We refer the reader to [5, 3, 7] for a formal definition of the model, which includes a complete operational semantics.
A transactor is composed of three key components: state encapsulation, a thread of control that represents its behavior, and a worldview. The state of a transactor can consist of two versions: persistent and volatile. A persistent state is that which has been committed to stable storage and is able to survive failures so it may be reverted to if necessary. A volatile state is one that is vulnerable to failures until it has been committed and holds all changes that differ from a previously committed state. A transactor itself is said to be permanent if it has made an initial commit to obtain a persistent state, otherwise it is regarded as ephemeral meaning it will be annihilated upon failure. A transactor’s behavior defines its response to incoming messages. Similar to an actor, when a transactor receives a message, it may create new transactors, send messages or modify its own state. In addition to actor primitives, it also has the option to stabilize, checkpoint, and rollback. Stabilization is considered the first step of a two-phase commit protocol and makes a transactor immutable until a checkpoint or rollback occurs. The second step of the two-phase commit is a checkpoint that, if successful, commits the transactor’s current state and guarantees consistency among peer transactors. That is, the current transactor state does not have a dependence on any other volatile transactor states. Lastly rollback brings a transactor back to a previously checkpointed state.
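The two-phase commit lifecycle described above can be sketched as a tiny state machine. The following Java class is purely illustrative, with names of our own (it is not the SALSA/transactor library API): a volatile state becomes immutable on stabilize, is committed to the persistent copy on checkpoint, and is discarded on rollback.

```java
// Illustrative sketch (our own names, not the library API) of the
// two-phase commit lifecycle: stabilize freezes the volatile state,
// checkpoint commits it, rollback reverts to the last commit.
public class ToyTransactor {
    private Object persistentState;  // survives failures
    private Object volatileState;    // lost on failure until committed
    private boolean stable = false;  // set by stabilize()

    public ToyTransactor(Object initial) {
        this.persistentState = initial;
        this.volatileState = initial;
    }

    // Mutation is a no-op once the transactor is stabilized.
    public boolean setState(Object newValue) {
        if (stable) return false;
        volatileState = newValue;
        return true;
    }

    public void stabilize() { stable = true; }            // phase one

    public void checkpoint() {                            // phase two
        if (stable) { persistentState = volatileState; stable = false; }
    }

    public void rollback() {                              // revert
        volatileState = persistentState;
        stable = false;
    }

    public Object getState() { return volatileState; }
}
```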
The worldview abstracts over currently known dependency information and has three components: a history map, dependency graph, and root set. The history map is a collection of mappings of transactor names to transactor histories. The history of a transactor abstracts over how many times it has checkpointed and rolled back in the past. A history has three defining properties: a volatility value, incarnation value, and incarnation list. A history’s volatility value indicates whether the current transactor is stable. Its incarnation value is a zero based numerical value which is incremented every time a rollback occurs. A checkpoint would append the current value to the history’s incarnation list and reset its incarnation to maintain a record of past checkpoints and rollbacks. A dependency graph is a set of transactor dependencies represented as directed edges on transactor names. The root set captures dependencies of message payloads.
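The history bookkeeping above can be made concrete with a small sketch. This is an illustrative class of our own devising, not the library's History implementation: each rollback bumps the zero-based incarnation counter, and a checkpoint appends that counter to the incarnation list and resets it.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch (our own names) of a transactor history: a
// volatility flag, an incarnation counter bumped on rollback, and an
// incarnation list recording the counter value at each checkpoint.
public class History {
    private boolean stable = false;          // volatility value
    private int incarnation = 0;             // incremented per rollback
    private final List<Integer> incarnationList = new ArrayList<>();

    public void stabilize() { stable = true; }

    public void rollback() {
        incarnation++;
        stable = false;
    }

    public void checkpoint() {
        incarnationList.add(incarnation);    // record rollbacks this epoch
        incarnation = 0;                     // reset for the new epoch
        stable = false;
    }

    public boolean isStable() { return stable; }
    public int getIncarnation() { return incarnation; }
    public List<Integer> getIncarnationList() { return incarnationList; }
}
```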
Dependency information is tracked by passing worldviews along with messages to other transactors. On reception of a message, a worldview union algorithm is applied to the current and received view, which reconciles these two views into a most up-to-date view. Through this algorithm, the transactor model is able to propagate dependency information among interacting transactors. Dependencies are inherited and created by recognizing state mutations as a consequence of evaluating messages and are recorded appropriately by the worldview.
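As a rough illustration of worldview reconciliation, the sketch below reduces a worldview to a history map (transactor name to rollback count) and a set of dependency edges, and merges two views by keeping the more recent history entry for each name while accumulating edges. The real union algorithm operates on full histories and root sets; this simplification and its names are our own.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Simplified sketch (our own) of a worldview union: reconcile the
// current and received views into a most-up-to-date view.
public class WorldviewUnion {
    // For each transactor keep the more recent (higher) rollback count.
    public static Map<String, Integer> unionHistories(
            Map<String, Integer> a, Map<String, Integer> b) {
        Map<String, Integer> out = new HashMap<>(a);
        b.forEach((name, inc) -> out.merge(name, inc, Math::max));
        return out;
    }

    // Dependency edges simply accumulate across the two views.
    public static Set<String> unionEdges(Set<String> a, Set<String> b) {
        Set<String> out = new HashSet<>(a);
        out.addAll(b);
        return out;
    }
}
```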
3. IMPLEMENTATION
Our language is first developed as a transactor library on top of the actor library used by SALSA compiled programs [9]. Figure 1 shows the class hierarchy diagram of our transactor library. A transactor is encoded in the transactor.language.Transactor class that extends and inherits from the salsa.language.UniversalActor class. In addition to the semantics inherited from a SALSA actor, we create Java classes that encapsulate the semantics of a transactor worldview and history. Each transactor instantiates a Worldview, but dependency semantics are meant to be transparent to the user. Similar to how SALSA implements a mailbox to handle message reception transparently, worldview operations are handled internally and the user cannot directly access such information except with supplied transactor primitive operators.
3.1 Message Passing
Message sending is inherited from SALSA as potential method invocations. We leverage the existing actor message handling implementation and add to the payload dependency information to accommodate the transactor model. Just as in SALSA, message sending is asynchronous and message processing is sequential though the ordering of messages is not guaranteed. Message parameters are pass by value to ensure there is no shared memory between transactor states.
We provide two methods to the Transactor class that implement transactor message handling:
```java
void sendMsg(String method, Object[] params, Transactor recipient);
void recvMsg(Message msg, Worldview msg_wv);
```
`sendMsg(...)` implements a message send by taking as arguments a string, `method`, that names the message to be invoked, an array of parameter objects, `params`, and the `recipient` transactor.
### 3.2 State Maintenance
State mutation and retrieval is done with the following two transactor methods:
```java
boolean setState(String field, Object newValue);
Object getState(String field);
```
`setState(...)` takes as arguments a string that represents the field being modified and the `newValue` to mutate the state with. Java reflection is used to reference the appropriate field in its state, and mutation is done by replacing the value with the new value. State fields are therefore inherently immutable, so set states actually create new values rather than mutating existing ones.
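The reflection-based accessors can be sketched as follows. This is an illustrative reduction of our own (a plain Java class with a public example field), not the library code; error handling is collapsed into a boolean or null result.

```java
import java.lang.reflect.Field;

// Illustrative sketch (our own) of reflection-based state access:
// setState looks up the named field and replaces its value,
// getState reads it back.
public class ReflectiveState {
    public int balance = 100;   // example state field

    public boolean setState(String field, Object newValue) {
        try {
            Field f = getClass().getField(field);
            f.set(this, newValue);   // replace the field's value
            return true;
        } catch (ReflectiveOperationException e) {
            return false;            // unknown field or bad type
        }
    }

    public Object getState(String field) {
        try {
            return getClass().getField(field).get(this);
        } catch (ReflectiveOperationException e) {
            return null;
        }
    }
}
```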
### 3.3 Transactor Creation
Transactor creation is done with the following transactor method:
```java
Transactor newTActor(Transactor new_T);
```
We use this method to extend the usual call to the `new` keyword in order to instantiate the newly created transactors worldview to reflect dependence on its parent. The `new_T` argument is an instantiated object of the transactor class to be created. The new transactor inherits the history map and dependency graph of the parent augmented with the new transactor’s name and dependencies laid on the new transactor by the names in the parent’s root set. Both parent and new child transactor will reflect the same history map and dependency graph but the parent will append the new transactor’s name in its root set while the child starts with a fresh root set. This method returns the same reference to the new instantiated transactor with an updated worldview.
The returned reference must then be type cast back to the constructed transactor class. The following code sample shows use of this method to create a new `FooBar` transactor:
```java
FooBar FObject = (FooBar) newTActor(new FooBar());
```
### 3.4 Fault Tolerance
Stabilization, checkpointing and rollbacks are provided in the form of the following three transactor operator methods:
```java
void stabilize();
void checkpoint();
void rollback(boolean force, Worldview updatedWV);
```
`stabilize()` updates the transactor history volatility value to be stable and stores the current transactor state in stable storage if it is not already stable. `checkpoint()` marks the stored stable state as persistent, overwriting previous persistent states, if the transactor is independent and stable and clears its worldview. `rollback(...)` performs state reversion to the most recent checkpoint. The arguments `force` and `updatedWV` are used when an implicit rollback is caused by being invalidated by a received message. By passing a `true` to the first argument we can force the transactor to rollback under this scenario even if it is stable. The second argument represents the updated worldview obtained by the worldview union algorithm so the rolled back state reflects this information.
Implementation of a state rollback is inspired by SALSA actor migration. Each transactor is inherently a SALSA actor, which encapsulates state in a thread of control so we handle rollbacks by halting the current thread and starting a new thread from a preserved checkpointed state and attaching the transactor name to it. However, before doing so, we create a special placeholder transactor, defined by the `transactor.language.Rollbackholder` class, to buffer incoming messages while the rollback operation is taking place. We register this placeholder state with the current transactor name under the SALSA naming service so messages can be routed correctly. We then read the checkpointed state from stable storage and tell the local system to start the transactor state as a new thread and reassign its name with the naming service. All buffered messages from the placeholder are then forwarded to the newly reloaded transactor’s mailbox and normal processing resumes.
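The buffering role of the placeholder can be sketched as a simple queue. This is an illustrative reduction of our own, not the transactor.language.Rollbackholder implementation: messages arriving during the rollback are held, then forwarded in order to the reloaded transactor's mailbox.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative sketch (our own) of the rollback placeholder: buffer
// incoming messages while a rollback is in progress, then forward
// them to the restarted transactor's mailbox in arrival order.
public class RollbackPlaceholder {
    private final Queue<String> buffered = new ArrayDeque<>();

    public void receive(String msg) {
        buffered.add(msg);              // hold until rollback completes
    }

    public void forwardTo(Queue<String> mailbox) {
        while (!buffered.isEmpty()) {
            mailbox.add(buffered.poll());
        }
    }
}
```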
### 4. PERSISTENT STATE STORAGE
#### 4.1 Universal Storage Locator
In order to handle persistent state storage, we introduce the `Universal Storage Locator` (USL) to represent the location where checkpoints will be made. This location can be the local system, a remote server, or even the cloud, allowing the user to specify the optimal location to create persistent storage. The USL is inspired from the `Universal Actor Name` (UAN) and `Universal Actor Locator` (UAL) in SALSA and is a simple uniform resource identifier. Some examples of USLs are shown below. The first USL indicates local storage, the second indicates remote storage on a specified FTP server, and the last USL specifies storage on Amazon’s Simple Storage Service (S3) cloud storage.
```plaintext
file://path/to/storage/dir/
ftp://user:pw@domain.com:1234/path/to/storage/dir/
http://s3.amazonaws.com/bucket/
```
Transactors are instantiated with a USL and if none is specified, checkpoints are made locally in the current directory. Specifying a USL is similar to specifying UAN and UAL in SALSA:
```plaintext
HelloWorld helloWorld = new HelloWorld()
    at (new UAN("user://nameserver/id"),
        new UAL("rmsp://host1:4040/id"),
        new USL("file://path/to/storage/directory/"));
```
When a transactor checkpoints it will reference its USL to serialize its state and store a <transactor-name>.ser file at the location given by its USL. A rollback will reference the same file at the USL location to retrieve and de-serialize its state. The implementation of a USL also allows the possibility of mobile transactors similar to how a SALSA actor’s UAN and UAL allow it to perform migration. Separating a transactor’s storage location makes it location independent, allowing it to migrate as opposed to a locally checkpointing transactor. However further research still needs to be done on modeling mobile transactors whose state may be location dependent.
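The checkpoint file naming convention above can be sketched with java.net.URI. The fragment is illustrative and the class name is ours; it only shows how a `<transactor-name>.ser` file is resolved against the transactor's USL.

```java
import java.net.URI;

// Illustrative sketch (our own): resolve a checkpoint's .ser file
// against the transactor's Universal Storage Locator.
public class UslNaming {
    public static URI checkpointLocation(URI usl, String transactorName) {
        // URI.resolve appends the file name to the USL directory.
        return usl.resolve(transactorName + ".ser");
    }
}
```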
4.2 Storage Service
Here we introduce the Transactor Storage Service, a service class that handles performing serialization/de-serialization of a transactor's state and storing/retrieving it at the transactor's USL. We implement this service as an interface shown in Figure 2. This simple interface has two methods for storing and retrieving state. We chose to create an interface to give the user the ability to implement his or her own desired serialization technique and USL protocol. Doing so gives the user the flexibility to define the optimal implementation that best caters to the given program specifications and performance requirements. For example, a user might wish to use an FTP server to handle checkpoints and will create USLs with the ftp:// scheme and implement the store(...) and get(...) methods to handle the FTP protocol with authentication. A high performance program can implement the use of cloud storage, which has many benefits to program performance such as data redundancy and locality. Another high performance example is an implementation that utilizes memory storage instead of persistent storage to achieve fast checkpoint and rollback calls in a program that disregards the possibility of node failures.

```java
public interface TStorageService {
    public void store(Object state, URI USL);
    public Object get(URI USL);
}
```

Figure 2: Transactor storage service interface
5. PROXY TRANSACTORS
The proxy transactor is a special transactor whose task is to pass along messages it receives without affecting the dependencies of those messages. Similar to a network proxy, a proxy transactor routes messages to other transactors and in doing so must not introduce any new dependencies on that proxy. Proxy transactors can prove useful in order to provide privacy for a certain resource or perform message filtering. We implement this abstraction by creating a Proxy transactor class that extends the Transactor class. By doing so we inherit all the semantics of a traditional transactor, however we will override the message send and receive implementations to prevent inserting volatile dependencies. We do so by simply issuing an explicit call to stabilize prior to sending or processing a new message. By stabilizing before sending a message, we guarantee that the recipient transactor remains independent with respect to the proxy. This affects situations where the proxy may perform a get state introducing its name to the message root set and the recipient subsequently performing a set state creating new dependencies on the names in the message root set. If the recipient wishes to perform a checkpoint in the future then its worldview would have knowledge of the proxy being stable and therefore not impede it from doing so. We perform stabilization before processing a message upon reception to guarantee new dependencies are not introduced to the proxy. By being stable, any set state calls while processing a message become no-ops and therefore the proxy will not inherit any new dependencies from the message. Lastly, to eliminate any transitive dependencies that stem from a proxy, we restrict proxy creation only to transactors who meet two conditions: the transactor must be independent and stable and the names in the transactor’s root set must also be independent and stable. 
We reason that this is logical because any invalidation of the parent transactor or transactors whose state resulted in the creation of the proxy will also invalidate the proxy and possibly any recipients of the proxy messages. This would be inconsistent with the semantics of a proxy transactor.
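The stabilize-before-processing trick can be sketched in a few lines. This toy class is our own illustration, not the Proxy implementation: because the proxy stabilizes before handling a message, any set state issued during processing is a no-op, so the proxy accrues no new dependencies.

```java
// Illustrative sketch (our own) of the proxy idea: stabilizing before
// processing a message turns any setState during processing into a
// no-op, so the proxy's state (and dependencies) cannot change.
public class ToyProxy {
    private boolean stable = false;
    private Object state = "initial";

    public boolean setState(Object v) {
        if (stable) return false;   // no-op while stable
        state = v;
        return true;
    }

    // Process a message handler after an explicit stabilize.
    public void process(Runnable handler) {
        stable = true;    // stabilize before touching the message
        handler.run();    // any setState inside is now a no-op
    }

    public Object getState() { return state; }
}
```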
6. CONSISTENT DISTRIBUTED STATE
6.1 Consistent Distributed State Protocol
To aid in composing transactor programs, we introduce the Consistent Distributed State Protocol (CDSP).
This protocol draws inspiration from the Universal Checkpointing Protocol (UCP) presented in [5]. The UCP was developed to ensure the liveness property of the tau calculus under a set of preconditions. If these preconditions are met then global checkpoints are established through this protocol. However, a strict precondition of the UCP states that no failures can occur while the UCP is taking place and no transactors will rollback during the UCP. This assumes previous application dependent communication and a fault resistant system to guarantee these conditions are met. While it is proven that global checkpoints are possible in this type of situation, any failure would render the UCP useless. Such failures may halt program progress if the rest of the program is unaware of the failure without extra communication. Therefore we have introduced this new protocol to ensure global consistent states can be reached even in the presence of failures. From a theoretical perspective, the CDSP guarantees the
---
1This example is written with syntactic sugar which compiles to the newTActor(Transactor new_T) method.
2In [7] this is called the Consistent Transaction Protocol.
In our proposed protocol we define 5 preconditions:
1. The transitive closure of all participants and their dependencies accrued during the CDS update must be known ahead of time and each participant must be able to receive and issue ping messages.
2. There must be isolation of the participating transactors during the CDS update; i.e., communication only within the set of participants and messages may not be received from an outside transactor that would introduce new dependencies.
3. Each participant starts from a state that is independent of any transactors other than those among the participants. We also assume the outside agent who sends the trigger message will not introduce any dependencies on itself or any other outside transactors.
4. Each participant must be stable at the end of the CDS update unless it has rolled back at some point during the CDS update.
5. The coordinator must be able to recognize that a CDS update has come to an end and indicate the start of the consistency protocol.
Once a CDS update completes, each participant will send ping messages to all other participants and attempt a checkpoint if it is independent. On reception of a ping message, the transactor will also attempt a checkpoint.
Since each transactor arrives at a stable state at the end of a CDS update if it has not rolled back, a checkpoint succeeds if it is independent or has received enough ping messages to know it is independent. In the case of failure, a rolled back transactor is volatile at the end of a CDS update so all checkpoint calls will be no-ops. On the other hand, ping messages sent out from the failed transactor will alert all those who were dependent on it and invalidate them, causing them to also rollback. Therefore a globally consistent state is reached at the end of the CDS update through this protocol. This protocol also exemplifies eager evaluation of dependencies as opposed to the natural lazy evaluation of the transactor model.
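The invalidation step above can be sketched as a reachability computation over the dependency graph: a failed participant's pings invalidate every participant that transitively depends on it. This sketch and its names are our own; `deps` maps each transactor to the set of transactors it depends on.

```java
import java.util.*;

// Illustrative sketch (our own) of rollback propagation: compute the
// set of participants invalidated, transitively, by a failed one.
public class CdsInvalidation {
    public static Set<String> invalidated(
            Map<String, Set<String>> deps, String failed) {
        Set<String> out = new HashSet<>();
        Deque<String> work = new ArrayDeque<>(List.of(failed));
        while (!work.isEmpty()) {
            String cur = work.poll();
            // Anyone depending on cur is invalidated and explored too.
            for (Map.Entry<String, Set<String>> e : deps.entrySet()) {
                if (e.getValue().contains(cur) && out.add(e.getKey())) {
                    work.add(e.getKey());
                }
            }
        }
        return out;
    }
}
```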
### 6.2 Ping Director
In order to accommodate the CDSP we introduce a new abstraction known as the Ping Director shown in Figure 3. The Ping Director is responsible for triggering the CDSP by requesting all participants to ping each other. We also extend the transactor with a new operator:
```java
void startCDSUpdate(Transactor[] participants,
Transactor coordinator,
String msg,
Object[] msg_args);
```
and three additional message handlers native to all transactors:
```java
void CDSUpdateStart(String msg, Object[] msg_args, PingDirector director);
void pingreq(Transactor[] pingreqs);
void ping();
```
The last two methods, pingreq(...) and ping(), give transactors the ability to send and receive ping messages. pingreq(...) takes an array of transactors and issues ping messages to each one, and ping() handles the reception of ping messages to attempt a checkpoint. The startCDSUpdate(...) method is a new transactor operator that is invoked by the outside agent who triggers the CDS update. This method takes as arguments the array of participants, the coordinator transactor to receive the trigger message, the trigger message, and the trigger message arguments. Internally this method will obtain an instance of the PingDirector that will handle the current CDS update and send a pingStart(...) message to the PingDirector instance with the array of participants, coordinator transactor reference, trigger message and its arguments. The PingDirector will then record in its state the array of participants and then send a CDSUpdateStart(...) message with the trigger message and its arguments and a reference to itself to the coordinator. The PingDirector also sends itself a ping() message, to be described later. The CDSUpdateStart(...) method records the PingDirector instance reference in its state and sends the trigger message to itself to be processed and start the CDS update. Since messages from the PingDirector affect the state of the coordinator, we need the PingDirector to be independent so it will not affect the dependencies of the CDS update. We do so by having the system create an instance of the PingDirector through a new service known as the CDSUpdateDirector. We access the CDSUpdateDirector through the salsa.language.ServiceFactory and request a new instance of the PingDirector instead of explicitly creating one in the startCDSUpdate(...) method. When the coordinator recognizes the completion of the CDS update it will send an endCDSUpdate() message to the PingDirector, causing the PingDirector to stabilize. This stabilization alerts the PingDirector that the CDS update is complete. The PingDirector recognizes this alert through the ping() it sent itself at the start of the CDS update. On reception of a ping() message the PingDirector inspects its volatility value as an indicator of whether the CDS update has completed. Before the CDS update has completed, the PingDirector will be volatile, so we have the PingDirector resend the ping() message to itself until it recognizes it has stabilized, in a polling manner. At that point the PingDirector will send pingreq messages to every participant and pass to each one the array of participants for them to ping.

```java
behavior PingDirector extends Transactor {
    private Transactor[] participants;

    public PingDirector();
    public void pingStart(Transactor[] participants,
                          Transactor coordinator, String msg,
                          Object[] msg_args);
    public void ping();
    public void endCDSUpdate();
}
```

Figure 3: PingDirector
Figure 4: Consistent distributed state protocol using
the PingDirector
The process of preparing and completing a CDS update is shown in Figure 4. Fortunately, this protocol is simplified by our abstraction, and the user only needs to indicate the start and end of a CDS update. An example of this abstraction in use is shown in section 8. We also note that a proxy transactor, described in the previous section, cannot be designated as a coordinator, since it cannot alter its state to record a reference to the PingDirector. Semantically, proxies have no effect on the global dependency and therefore do not participate in the CDSP, as they are always consistent with the global state.
We note that currently the CDSP assumes that the coordinator and ping director are resistant to failure; if either of these agents failed, the CDSP could not be triggered. To accommodate this possibility, we propose extending the protocol to provide fault tolerance in the form of redundancy. This can be done by assigning multiple coordinators, each of which would be able to recognize a CDS update completion and trigger the protocol if another fails. The same can be done by creating multiple ping directors for a CDS update and supplying a reference to each one to the coordinator. The exact details of implementing a fault-tolerant CDSP are left as future work.
7. LANGUAGE SYNTAX
Similar to SALSA, transactor programs are written as actor behaviors that are compiled into Java classes extending the transactor.language.Transactor class. Through this inheritance chain, behaviors have access to an augmented set of operators that include both actor and transactor primitives. These operators can only be called by the transactor itself and are not explicit message handlers; therefore, one transactor cannot directly issue a stabilize, checkpoint, or rollback on another. These operators must be placed in message handlers inside the transactor's behavior. These operations are also sequential in nature, unlike message sends, which are concurrent. We define here our proposed syntax changes for our new transactor language, which extends the SALSA/Java syntax.
The following statements are added to SALSA’s syntax along
with the compiled transactor library code:
stabilize; ≡ this.stabilize();
checkpoint; ≡ this.checkpoint(); return;
rollback; ≡ this.rollback(false, null); return;
dependent; ≡ this.dependent();
self; ≡ this.self();
behavior <Identifier>
≡ behavior <Identifier> extends Transactor
behavior proxy <Identifier>
≡ behavior <Identifier> extends Proxy
startCDSUpdate(<ArgumentList>);
≡ this.startCDSUpdate(<ArgumentList>);
endCDSUpdate;
≡ this.sendMsg("endCDSUpdate", new Object[0],
((PingDirector)this.getTState("pingDirector")));
new <Transactor-Behavior>
≡ ((<Transactor-Behavior>)this.newTActor(
new <Transactor-Behavior>));
<State-Identifier>::=<Expression>;
≡ this.setTState("<State-Identifier>",<Expression>);
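Read as rewrite rules, the simple statement-level equivalences above can be tabulated. The following toy Java sketch (pure string substitution of our own devising, not the actual SALSA/transactor preprocessor) illustrates the mapping for the parameterless forms:

```java
import java.util.Map;

// Toy desugaring table mirroring the statement-level syntax equivalences.
// Purely illustrative string rewriting; a real preprocessor works on ASTs.
class Desugar {
    static final Map<String, String> TABLE = Map.of(
        "stabilize;",  "this.stabilize();",
        "checkpoint;", "this.checkpoint(); return;",
        "rollback;",   "this.rollback(false, null); return;",
        "dependent;",  "this.dependent();",
        "self;",       "this.self();"
    );

    // Statements not in the table (ordinary Java) pass through unchanged.
    static String desugar(String stmt) {
        return TABLE.getOrDefault(stmt, stmt);
    }
}
```

Note that checkpoint; and rollback; both expand to a call followed by return;, reflecting that these operators terminate the enclosing message handler.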
8. HOUSE PURCHASE EXAMPLE
This example simulates the subset of operations that might be performed by a collection of web services involved in the negotiation of a house purchase. Traditionally, a house purchase is a complex task that involves multiple parties and back-and-forth communication. Some of the steps required include appraising the desired house, searching for the title, applying for a mortgage, and making negotiations. We represent these operations using five services: the buySrv representing the buyer, the sellSrv representing the seller, the apprSrv representing the appraisal service, the lendSrv representing the mortgage lender, and the searchSrv representing the title search service. Our example defines the following steps taken to complete a house purchase:
1. The buyer chooses a candidate house and initiates the buySrv to manage the house purchase process.
2. The buySrv contacts the appraisal service, apprSrv, in order to obtain the market value of the house.
3. The apprSrv contacts the sellSrv and requests basic information about the house.
4. The apprSrv combines the house specifications with other reference information to compute a tentative market price. This tentative market price is only an estimate, which is not a definite appraisal until an on-site visit is made to the house to verify the accuracy of the original specifications.
5. The buySrv makes an offer to the sellSrv based on the appraisal. The buySrv also contacts the searchSrv to perform a title search and the lendSrv to obtain a mortgage.
6. The lendSrv contacts the apprSrv to confirm the appraisal information that is given after an on-site verification is completed.
7. The lendSrv approves the mortgage after a credit check, and the buySrv closes the house purchase once it receives a response from the searchSrv and the sellSrv accepts the offer.
The steps above describe a scenario where every step runs accordingly without any semantic failures. However, one possible way this house purchase may fail can be observed in step 6 in the case of the verification discovering inaccurate information. Upon this discovery the apprSrv voluntarily rolls back its state in order to reprocess the verified specifications. This in turn causes the mortgage information to be inconsistent with the information the buySrv has. As a result, the buySrv must also be caused to rollback due to this invalidated dependency where it may choose to renegotiate the sale price. Figure 5 depicts this failure scenario.
Figures 6, 7, 8, 9, and 10 show our implementation of this example written in our proposed language syntax. The on-site verification service implementation (verifySrv) and the credit database service implementation (creditDB), along with unimportant code segments, are omitted due to paper length restrictions. The searchSrv, verifySrv, and creditDB are implemented as proxies because they only provide access to a resource in order to obtain information and thus have no effect on the global dependency. This implementation also allows other types of failures to occur, such as an offer rejection and a mortgage denial. We make use of our Consistent Distributed State Protocol and Ping Director in this example to manage the house purchase transaction: they notify all participants of a failure, or issue a global checkpoint so that we arrive at a globally consistent state. This transaction is started by the following call by an outside agent:
```java
Transactor[] participants = {<buySrv>, <sellSrv>,
<apprSrv>, <lendSrv>, <searchSrv>,
<verifySrv>, <creditDB>};
startCDSUpdate(participants, <buySrv>,
"newHousePurchase", <houseid>);
```
An important observation can be made from this example highlighting how the transactor model tracks fine-grained dependencies. Though the use of the CDSP promotes atomicity of a transaction, its primary purpose is to guarantee consistency, as the name suggests. The atomicity aspect of the CDSP and the transactor model applies only to participants that are strictly invalidated by a dependency on a failed component. In that regard, other participants, such as the searchSrv, which remains independent throughout the transaction, will not rollback even if another participant encounters failure. This key feature separates the transactor model from traditional transaction methodologies that take an "all or nothing" approach. Like the searchSrv, any participant that is semantically unaffected by the overall result of the transaction will not have its operations reverted. This prevents unnecessary rollbacks: if the transaction is attempted again, the same tasks need not be redone, and prior results can be reused without being recomputed. The searchSrv is a highly simplified implementation of an actual title search service, which would involve a much more complex process to locate the required information for the title to the house; this result would have to be re-computed if the search service were to rollback. If the overall transaction does fail and is reattempted, that title information will still be persistent, allowing us to reuse resources. The fact that the `searchSrv` is implemented as a proxy also ensures that it has no effect on the global dependency of the transaction and will not incur any dependencies upon itself. Similarly, the `verifySrv` and `creditDB`, both being proxies, have no effect on the rest of the transaction and will never be caused to rollback.
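The notion of selective rollback can be made concrete with a small sketch: given a dependency graph, only participants transitively dependent on the failed component are invalidated. The edges below are invented for illustration; in the transactor model these dependencies are tracked automatically as messages are exchanged.

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of selective rollback: compute the set of participants invalidated
// by a failure, i.e. the failed one plus everything transitively depending
// on it. Illustrative only; not the transactor runtime's actual bookkeeping.
class SelectiveRollback {
    // dependsOn.get(x) = the set of participants whose state x depends on
    static Set<String> invalidated(Map<String, Set<String>> dependsOn, String failed) {
        Set<String> bad = new HashSet<>();
        bad.add(failed);
        boolean changed = true;
        while (changed) {                          // fixed-point: transitive closure
            changed = false;
            for (var e : dependsOn.entrySet()) {
                if (!bad.contains(e.getKey())
                        && !Collections.disjoint(e.getValue(), bad)) {
                    bad.add(e.getKey());
                    changed = true;
                }
            }
        }
        return bad;
    }
}
```

With edges mirroring the house purchase (buySrv depending on apprSrv and lendSrv, lendSrv on apprSrv, searchSrv on nothing), a failure of apprSrv invalidates buySrv and lendSrv but leaves searchSrv untouched, which is precisely the behavior described above.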
```java
behavior lendSrv {
    buySrv buyer;
    String house;
    int price = 0;
    creditDB creditAgency;
    ...
    void reqMortgage(String houseId, buySrv buyr, int reqPrice,
                     apprSrv appraiser, creditDB creditHistory) {
        house := houseId;
        price := reqPrice;
        buyer := buyr;
        creditAgency := creditHistory;
        appraiser<-reqPrice(self);
    }
    void appraisal(int newPrice) {
        price := newPrice;
        ~!creditAgency<-getCreditApproval(~!house, ~!buyer, ~!price, self);
    }
    void approvalResp(String approvalid) {
        if (approvalid != null) {
            stabilize;
            ~!buyer<-mortgageApproval(approvalid);
        } else {
            ~!buyer<-mortgageDeny();
            rollback;
        }
    }
}
```
Figure 6: lendSrv implementation
9. RELATED WORK
Though previous work exists that aims to support reliable distributed state, ours is the first working implementation of the transactor model. Other such systems include Liskov's Argus [8] programming language. Argus provides an abstraction known as a guardian that is very much akin to a SALSA actor. Like an actor, guardians are meant to encapsulate a resource and permit access to it through handlers. Fault tolerance in Argus is provided with stable objects implemented as atomic objects, which arbitrate access through the use of locks to resolve concurrency. Similar to a transactor's persistent and volatile state, atomic objects use versioning to handle recovery from failures. Unlike transactors, Argus does not directly track dependencies and takes an "all or nothing" approach to determining whether a set of operations should be committed.
Another system is Atomos [4], introduced by Carlstrom et al. as a transactional programming language with implicit transactions, strong atomicity, and a scalable multiprocessor implementation. Atomos relies on the transactional memory model, which executes read and write instructions atomically. Unlike Argus, but comparable to transactors, Atomos provides open nested transactions, which immediately commit child transactions at completion. Like transactors, where independent agents of a failed transaction can still checkpoint, the rollback of a parent transaction is independent from completed open nested transactions.
Stabilizers [12], introduced by Ziarek et al., are a linguistic abstraction that models transient failures in concurrent threads with shared memory. These abstractions enforce global consistency by monitoring thread interactions to compute the transitive closure of dependencies. Like transactors, any non-local action such as thread communication or thread creation constitutes a state dependency; however, these dependencies are recorded even if there is no state mutation. In the presence of transient failure, rollbacks revert state to a point immediately preceding some non-local action; these points become implicit checkpoints. Unlike transactors, there is no predefined concrete checkpoint to rollback to, since stabilizers perform thread monitoring instead of state captures.
10. DISCUSSION AND FUTURE WORK
Traditionally, transactions have been modeled under object-oriented paradigms with concurrent threads that interact through shared memory. As a result, maintaining the integrity of a transaction has largely relied on issuing locks on objects to prevent race conditions. However, the biggest problem with such techniques is the possibility of deadlock, which makes it relatively difficult to compose transactional programs correctly. The message-passing and state-encapsulating nature of actors allows them to naturally model atomicity and isolation of message execution, thereby eliminating the need for object-level locks. The semantics of the transactor model thus provides a much cleaner and more robust building block for modeling transactions.
A reliable transaction is commonly defined by its ACID
```java
behavior buySrv {
    searchSrv searcher;
    apprSrv appraiser;
    sellSrv seller;
    lendSrv lender;
    verifySrv verifier;
    creditDB creditHistory;
    int price = 0;
    String title, mortgage, houseid;
    void newHousePurchase(String newHouseId) {
        houseid := newHouseId;
        ~!appraiser<-reqAppraisal(~!houseid, self, ~!seller, ~!verifier);
    }
    void appraisal(int newPrice) {
        price := newPrice;
        ~!seller<-offer(~!houseid, ~!price, self);
        ~!searcher<-reqSearch(~!houseid, self);
        ~!lender<-reqMortgage(~!houseid, self,
            ~!price, ~!appraiser, ~!creditHistory);
    }
    void titleResp(String newTitle) {
        title := newTitle;
    }
    void mortgageApproval(String approvalid) {
        mortgage := approvalid;
    }
    void close() {
        if (~!title != null && ~!mortgage != null) {
            stabilize;
            endCDSUpdate;
        } else {
            self<-close();
        }
    }
    void rejectOffer() {
        endCDSUpdate;
        rollback;
    }
    void mortgageDeny() {
        endCDSUpdate;
        rollback;
    }
}
```
Figure 8: buySrv implementation
```java
behavior apprSrv {
    String house, specs;
    int price = 0;
    buySrv buyer;
    Transactor requester;
    verifySrv verifier;
    void reqAppraisal(String houseid, buySrv buyr,
                      sellSrv seller, verifySrv verifr) {
        buyer := buyr;
        house := houseid;
        verifier := verifr;
        seller<-reqSpecs(~!house, self);
    }
    void specsResp(String newSpecs, int newPrice) {
        specs := newSpecs;
        price := newPrice;
        ~!buyer<-appraisal(newPrice);
    }
    void reqPrice(Transactor customer) {
        requester := customer;
        ~!verifier<-verifySpecs(~!house, ~!specs, self);
    }
    void verify(boolean ok, int verifiedPrice) {
        if (ok) {
            stabilize;
            ~!requester<-appraisal(verifiedPrice);
        } else {
            ~!requester<-appraisal(verifiedPrice);
            rollback;
        }
    }
}
```
Figure 10: apprSrv implementation
```java
behavior proxy searchSrv {
    HashMap titlesDB;
    void reqSearch(String houseId, Transactor customer) {
        customer<-titleResp(~!titlesDB.get(houseId));
    }
}
```
Figure 9: searchSrv implementation
properties. While the transactor model only guarantees consistency and durability, transactors break down a transaction into its fundamental elements. Atomicity and isolation can be coded into the model if desired; however, as shown in our example, transactors provide a looser form of atomicity that we call selective rollback: we only undo what is known to be inconsistent. Full isolation is also not a strict requirement for transactor programs, as one of the preconditions of the CDSP only requires that participants not obtain new dependencies on outside transactors. We refer to this as selective state access. State accesses that create backward dependencies are perfectly legal, since they do not prevent the participating transactor from checkpointing. Therefore, the lack of full ACID properties is a design feature, allowing for the creation of lightweight and modular transactions.
Though our language is currently a working implementation of the transactor model, it is still in a developmental stage and leaves much work to be done. As a consequence of its development it also opens up new directions in the study of transactors. Our next objective would be to develop a compiler similar to the SALSA preprocessor to produce Java code that can be compiled and run on a JVM. This compiler would greatly simplify writing transactor programs with the proposed syntax, which inherits much of the familiar SALSA and Java grammar.
Another key future goal is implementing node failure semantics. Following the transition rules of the transactor model, a transactor system needs to be able to recognize node failures and reload transactors from persistent storage. A record of previously running transactors on the node would be required, perhaps as an extension of the naming service. The program would then proceed normally, as if a rollback had occurred. This also raises the question of how to bootstrap programs and restart the network of messages. Along with bootstrapping programs, there is an open question of whether to initially checkpoint the startup transactor to prevent total program annihilation if the startup node fails before the transactor becomes persistent.
An improvement can also be made to the CDSP to guarantee full isolation among participants of a given CDS update to satisfy one of its preconditions. One possible technique is to apply a two-phase CDS update initialization protocol similar to the two-phase commit protocol. The necessity of a two-phase process is due to the message passing nature of transactors where there is no guarantee of when messages will arrive or even be received. Such a protocol could involve the use of synchronization constraints such as Synchronizers [6] that handle message dispatching to disable messages arriving from outside transactors. Achieving isolation would be valuable so the user would only have to reason about the specifics of a CDS update rather than consider its reliability.
A future direction to the study of transactors is modeling migration. SALSA has built in support for actor migration and our transactor language allows transactors to be initialized in different SALSA theaters. However, there are concerns over whether location is represented by a transactor’s state where an implementation would have to perform reverse migrations should a transactor ever rollback. Migration also becomes a factor in implementing node failure where each node would have to track which transactors would have to be recovered. Transactor USL was developed to permit the possibility of mobile transactors so persistent state storage would not become a limiting factor.
Lastly, interaction between transactors can be simplified by implementing continuations. Currently, in order to retrieve information from another transactor, the sender’s name needs to be passed along with the message so the recipient knows where to send a reply. Continuations would make it easier to compose transactor programs by emulating serialized execution among asynchronous transactors. SALSA provides this in the form of tokens. However, research needs to be done to consider how to model tokens in tau calculus under the transactor model so that dependencies can be applied correctly.
11. REFERENCES
Well-founded Recursion over Contextual Objects
Brigitte Pientka¹ and Andreas Abel²
¹ School of Computer Science
McGill University, Montreal, Canada
bpientka@cs.mcgill.ca
² Department of Computer Science and Engineering, Gothenburg University
Göteborg, Sweden
andreas.abel@gu.se
Abstract
We present a core programming language that supports writing well-founded structurally recursive functions using simultaneous pattern matching on contextual LF objects and contexts. The main technical tool is a coverage checking algorithm that also generates valid recursive calls. To establish consistency, we define a call-by-value small-step semantics and prove that every well-typed program terminates using a reducibility semantics. Based on the presented methodology we have implemented a totality checker as part of the programming and proof environment Beluga where it can be used to establish that a total Beluga program corresponds to a proof.
1998 ACM Subject Classification
D.3.1 [Programming Languages]: Formal Definitions and Languages; F.3.1 [Logics and Meanings of Programs]: Specifying and Verifying and Reasoning about Programs
Keywords and phrases Type systems, Dependent Types, Logical Frameworks
1 Introduction
Mechanizing formal systems and their proofs play an important role in establishing trust in formal developments. A key question in this endeavor is how to represent variables and assumptions to which the logical framework LF [8], a dependently typed lambda-calculus, provides an elegant and simple answer: both can be represented uniformly using LF’s function space, modelling binders in the object language using binders in LF. This kind of encoding is typically referred to as higher-order abstract syntax (HOAS) and provides a general uniform treatment of syntax, rules and proofs.
While the elegance of higher-order abstract syntax encodings is widely acknowledged, it has been challenging to reason inductively about LF specifications and formulate well-founded recursion principles. HOAS specifications are not inductive in the standard sense. As we recursively traverse higher-order abstract syntax trees, we extend our context of assumptions, and our LF object does not remain closed. To tackle this problem, Pientka and collaborators [11, 4] propose to pair LF objects together with the context in which they are meaningful. This notion is then internalized as a contextual type \([\Psi.A]\) which is inhabited by terms \(M\) of type \(A\) in the context \(\Psi\) [9]. Contextual objects are then embedded into a computation language which supports general recursion and pattern matching on contexts and contextual objects. Beluga, a programming environment based on these ideas [13], facilitates the use of HOAS for non-trivial applications such as normalization-by-evaluation [4] and a type-preserving compiler including closure conversion and hoisting [3]. However, nothing in this work enforces or guarantees that a given program is total.
In this paper, we develop a core functional language for reasoning inductively about context and contextual objects. One can think of this core language as the target of a Beluga program: elaboration may use type reconstruction to infer implicit indices [6] and generate valid well-founded recursive calls that can be made in the body of the function. Type checking will guarantee that we are manipulating well-typed objects and, in addition, that a given set of cases is covering and the given recursive calls are well-founded. To establish consistency, we define a call-by-value small-step semantics for our core language and prove that every well-typed program terminates, using Tait’s method of logical relations. Thus, we justify the interpretation of well-founded recursive programs in our core language as inductive proofs. Based on our theoretical work, we have implemented a totality checker for Beluga.
Our approach is however more general: our core language can be viewed as a language for first-order logic proofs by structural induction over a given domain. The domain must only provide answers to three domain-specific questions: (1) how to unify objects in the domain, (2) how to split on a domain object and (3) how to justify that a domain object is smaller according to some measure. The answer to the first and second question allows us to justify that the given program is covering, while the third allows us to guarantee termination. For the domain of contextual LF presented in this paper, we rely on higher-order unification [2] for (1), and our splitting algorithm (2) and subterm ordering (3) builds on previous work [5, 10]. As a consequence, our work highlights that reasoning about HOAS representations via contextual types can be easily accommodated in a first-order theory. In fact, it is a rather straightforward extension of how we reason inductively about simple domains such as natural numbers or lists.
The remainder of the paper is organized as follows. We first present the general idea of writing and verifying programs to be total in Sec. 2 and then describe in more detail the foundation of our core programming language which includes well-founded recursion principles and simultaneous pattern matching in Sec. 3. The operational semantics together with basic properties such as type safety is given in Sec. 4. In Sec. 5, we review contextual LF [4], define a well-founded measure on contextual objects and contexts, and define the splitting algorithm. Subsequently we describe the generation of valid well-founded recursive calls generically, and prove normalization (Sec. 7). We conclude with a discussion of related work, current status and future research directions. Due to space constraints, proofs have been omitted.
2 General Idea
2.1 Example 1: Equality on Natural Numbers
To explain the basic idea of how we write inductive proofs as recursive programs, we consider first a very simple example: reasoning about structural equality on natural numbers (see Listing 1). We encode natural numbers and equality on them in the logical framework LF.
Listing 1 Encoding of an Inductive Proof as a Recursive Function
```
nat : type.          eq : nat → nat → type.
z : nat.             eq_z : eq z z.
s : nat → nat.       eq_s : eq M N → eq (s M) (s N).

ref : Π M:nat. [eq M M] = Λ M ⇒ rec-case M of
  | ref z ⇒ [eq_z]
  | M':nat ; ref M':[eq M' M']. ref (s M') ⇒ let D = ref M' in [eq_s M' M' D];
```
Well-founded Recursion over Contextual Objects
The free variables $M$ and $N$ in the definition of $eq_s$ are implicitly quantified on the outside. Program $ref$ proves reflexivity of $eq$: for all $M:nat$ we can derive $eq M M$. Following type-theoretic notation, we write $\Pi$ for universal quantification; we embed LF objects which denote base predicates via $\square$. Abstraction over the LF object $M$ is written $\Lambda M \Rightarrow$ in our language.
Using rec-case, we prove inductively that for all $M$ there is a derivation of $(eq M M)$. There are two cases to consider: $ref z$ describes the base case where $M$ is zero and the goal refines to $(eq z z)$. In this case, the proof is simply $eq_z$. In the step case, written as $ref (s M')$, we also list explicitly the other assumptions: the type of $M'$ and the induction hypothesis, written as $ref M':[eq M' M']$. To establish $[eq (s M') (s M')]$, we first obtain a derivation $D$ of $eq M' M'$ by the induction hypothesis and then extend it to a derivation $[eq_s M' M' D]$ of $[eq (s M') (s M')]$. We highlight in green redundant information which can be inferred automatically. In the pattern, this is the typing (here: $M':nat$) of the pattern variables [12, 6] and the listing of the induction hypotheses. The dot "." separates these assumptions from the main pattern. For clarity, we choose to write the pattern as a simultaneous pattern match and make the name of the function explicit; in practice, we only write the main pattern, which is left in black, and all other arguments are inferred.
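As a cross-check of Listing 1, the same inductive argument can be transcribed into a mainstream proof assistant. The following Lean sketch is our own illustrative transcription (with the ad hoc names `Eq'` and `ref`); Beluga's contextual features play no role in this closed example, so the correspondence is direct:

```lean
-- The eq relation of Listing 1, transcribed to Lean (illustrative).
inductive Eq' : Nat → Nat → Prop
  | eq_z : Eq' 0 0
  | eq_s : {m n : Nat} → Eq' m n → Eq' (m + 1) (n + 1)

-- `ref` from Listing 1: reflexivity by structural recursion on M.
-- The base case uses eq_z; the step case extends the IH with eq_s.
def ref : (M : Nat) → Eq' M M
  | 0     => Eq'.eq_z
  | m + 1 => Eq'.eq_s (ref m)
```

Lean's termination checker accepts `ref` because the recursive call is on the structurally smaller `m`, mirroring the well-founded recursion that our coverage checker certifies for the Beluga program.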
2.2 Example 2: Intrinsically Typed Terms
Next, we encode intrinsically typed $\lambda$-terms. This example does exploit the power of LF.
We define base types such as $bool$ and function types, written as $arr T S$, and represent simply-typed lambda-terms using the constructors $lam$ and $app$. In particular, we model binding in the lambda-calculus (our object language) via HOAS, using the LF function space. For example, the identity function is represented as $lam \lambda x.x$ and function composition as $lam \lambda g. lam \lambda f. lam \lambda x. app f (app g x)$. As we traverse $\lambda$-abstractions we record the variables we encounter in a context $\phi$: $ctx$. Its shape is given by a schema declaration $schema\ ctx = tm A$, stating that it contains only variable bindings of type $tm A$ for some $A$. To reason about typing derivations, we package the term (or type) together with its context, forming a contextual object (or contextual type, resp.). For example, we write $\phi \vdash tm A$ for an object of type $tm A$ in the context $\phi$. Such contextual types are embedded into logical statements as $[\phi \vdash tm A]$. When the context $\phi$ is empty, we may drop the turnstile and simply write $[tm A]$.
Counting constructors: Induction on (contextual) LF object
As an example, we consider counting constructors in a term; this corresponds to defining the overall size of a typing derivation. We recursively analyze terms $M$ of type $tm A$ in the context $\phi$. In the variable case, written as $count\ \phi\ B\ (\phi \vdash p\ ...)$, we simply return zero. The pattern variable $p$ stands for a variable from the context $\phi$. We explicitly associate it with the identity substitution, written as $...$, to use $p$, which has declared type $\phi \vdash tm B$, in the context $\phi$. Not writing the identity substitution would enforce that the pattern variable $p$ does not depend on $\phi$ and would force the type of $p$ to be $[\vdash tm B]$. While it would be legitimate to use such a $p$ in the context $\phi$, since the empty substitution maps the variables of the empty context into $\phi$, the type $[\vdash tm B]$ is empty: since the context is empty, there are no variables of type $tm B$. Hence writing $(\phi \vdash p)$ would describe an empty pattern. In contrast, types described by
Listing 2 Counting constructors
\[
\begin{aligned}
count \colon{} & \Pi \phi{:}\text{ctx}.\ \Pi A{:}\text{tp}.\ \Pi M{:}(\phi \vdash \text{tm}\ A).\ [\text{nat}] = \\
& \text{rec-case}^{\Pi \phi{:}\text{ctx}.\,\Pi A{:}\text{tp}.\,\Pi M{:}(\phi \vdash \text{tm}\ A).\,[\text{nat}]}\ M \text{ of} \\
& \mid \phi, B, p;\ .\ count\ \phi\ B\ (\phi \vdash p[\ldots]) \Rightarrow [\text{z}] \quad \%\ \text{Variable case} \\
& \mid \phi, B, C, M; \\
& \qquad count\ (\phi, x{:}\text{tm}\ B)\ C\ (\phi, x{:}\text{tm}\ B \vdash M[\ldots, x]) : [\text{nat}] \quad \%\ \text{IH} \\
& \quad .\ count\ \phi\ (\text{arr}\ B\ C)\ (\phi \vdash \text{lam}\ \lambda x. M[\ldots, x]) \quad \%\ \text{Abstraction case} \\
& \quad \Rightarrow \text{let } X = count\ (\phi, x{:}\text{tm}\ B)\ C\ (\phi, x{:}\text{tm}\ B \vdash M[\ldots, x]) \text{ in } [\text{s}\ X] \\
& \mid \phi, B, C, M, N; \\
& \qquad count\ \phi\ (\text{arr}\ B\ C)\ (\phi \vdash M[\ldots]) : [\text{nat}] \quad \%\ \text{IH} \\
& \qquad count\ \phi\ B\ (\phi \vdash N[\ldots]) : [\text{nat}] \quad \%\ \text{IH} \\
& \quad .\ count\ \phi\ C\ (\phi \vdash \text{app}\ B\ C\ (M[\ldots])\ (N[\ldots])) \quad \%\ \text{Application case} \\
& \quad \Rightarrow \text{let } X = count\ \phi\ (\text{arr}\ B\ C)\ (\phi \vdash M[\ldots]) \text{ in} \\
& \qquad\ \text{let } Y = count\ \phi\ B\ (\phi \vdash N[\ldots]) \text{ in } \text{add}\ X\ Y
\end{aligned}
\]
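The recursion pattern of Listing 2 can be mimicked in a first-order setting. The following Python sketch is illustrative only: it uses tagged tuples with named variables in place of HOAS, and it lets every `lam`/`app` constructor contribute one to the size while variables contribute zero, mirroring the variable case above (whether the application node itself is counted is a bookkeeping choice).

```python
# First-order stand-in for the simply-typed lambda-terms of the paper.
# Beluga represents binders via HOAS; plain tagged tuples with named
# variables are used here purely for illustration:
#   ('var', x) | ('lam', x, body) | ('app', fn, arg)

def count(m):
    """Size of a term: each lam/app constructor counts one,
    variables count zero (the variable case of Listing 2)."""
    tag = m[0]
    if tag == 'var':
        return 0
    if tag == 'lam':
        return 1 + count(m[2])                 # IH on the body
    if tag == 'app':
        return 1 + count(m[1]) + count(m[2])   # two IHs, sizes added
    raise ValueError(m)

# identity  lam λx. x : one constructor
assert count(('lam', 'x', ('var', 'x'))) == 1
# lam λx. app (lam λy. y) x : three constructors
assert count(('lam', 'x', ('app', ('lam', 'y', ('var', 'y')), ('var', 'x')))) == 3
```

The abstraction case recurses on the body exactly as in Listing 2: the context (here, the set of bound names in scope) grows, but the term shrinks.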
Listing 3 Computing length of a context
\[
\begin{aligned}
length \colon{} & \Pi \phi{:}\text{ctx}.\ [\text{nat}] = \\
& \text{rec-case}^{\Pi \phi{:}\text{ctx}.\,[\text{nat}]}\ \phi \text{ of} \\
& \mid \cdot\,;\ .\ length\ \cdot \Rightarrow [\text{z}] \\
& \mid \psi, A;\ length\ \psi : [\text{nat}]\ .\ length\ (\psi, x{:}\text{tm}\ A) \Rightarrow \text{let } X = length\ \psi \text{ in } [\text{s}\ X]
\end{aligned}
\]
meta-variables $A$ or $B$, for example, are always closed: they can be instantiated with any closed object of type $\text{tp}$, and we do not associate them with an identity substitution.
In the case for lambda-abstractions, \(count\ \phi\ (\text{arr}\ B\ C)\ (\phi \vdash \text{lam}\ \lambda x. M[\ldots, x])\), we not only list the type of each of the variables occurring in the pattern, but also the induction hypothesis, \(count\ (\phi, x{:}\text{tm}\ B)\ C\ (\phi, x{:}\text{tm}\ B \vdash M[\ldots, x]) : [\text{nat}]\). Although the context grows, the term itself is smaller. In the body of the case, we use the induction hypothesis to determine the size \(X\) of \(M[\ldots, x]\) in the context \(\phi, x{:}\text{tm}\ B\) and then increment it.
The case for application, \(count\ \phi\ C\ (\phi \vdash \text{app}\ B\ C\ (M[\ldots])\ (N[\ldots]))\), is similar. We again list all the types of variables occurring in the pattern as well as the two induction hypotheses. In the body, we determine the size \(X\) of \(\phi \vdash M[\ldots]\) and the size \(Y\) of \(\phi \vdash N[\ldots]\) and then add them.
Computing the length of a context: Induction on the context
As we have the power to abstract over and manipulate contexts as first-class objects, we can also reason inductively about them. Contexts are similar to lists: we distinguish between the empty context, written here as \(\cdot\), and a context consisting of at least one element, written as \(\psi, x{:}\text{tm}\ A\). In the latter case, we can appeal to the induction hypothesis on \(\psi\) (see Listing 3).
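The induction of Listing 3 can be sketched over a list-like encoding of contexts; the nested pairs `(psi, decl)` below are an illustrative encoding standing in for \(\psi, x{:}\text{tm}\ A\), not Beluga's actual representation.

```python
def length(ctx):
    """Length of a context by induction on its structure:
    the empty context () maps to z, and (psi, decl) to s (length psi)."""
    if ctx == ():                # empty context ⇒ z
        return 0
    psi, _decl = ctx             # ctx = ψ, x:tm A
    return 1 + length(psi)       # let X = length ψ in [s X]

assert length(()) == 0
assert length((((), 'x:tm bool'), 'y:tm (arr bool bool)')) == 2
```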
3 Core language with well-founded recursion
In this section, we present the core of Beluga's computational language, which allows the manipulation of contextual LF objects by means of higher-order functions and primitive recursion. In our presentation of the computational language, however, we keep the domain abstract, referring simply to \(U\), the type of a domain object, and \(C\), an object of the given domain. In fact, our computational language is parametric in the actual domain. To guarantee totality of a program, the domain needs to provide answers to two main questions: 1) how to split on a domain type \(U\), and 2) how to determine whether a domain object \(C\) is smaller according to some domain-specific measure. We also need to know how to unify two terms and determine when two terms in our domain are equal. In terms of proof-theoretic strength, the language is comparable to Gödel's T or Heyting Arithmetic, where the objects of study are natural numbers. However, in our case \(U\) will stand for a (contextual) LF type and \(C\) for a (contextual) LF object.
\[
\begin{align*}
\text{Types} & : \quad \tau ::= [U] \mid \tau_1 \to \tau_2 \mid \Pi X{:}U.\,\tau \\
\text{Expressions} & : \quad e ::= y \mid [C] \mid \text{fn } y{:}\tau \Rightarrow e \mid e_1\ e_2 \mid \Lambda X{:}U \Rightarrow e \mid e\ C \\
& \qquad\quad \mid \text{let } X = e_1 \text{ in } e_2 \mid \text{rec-case}^{\mathcal{I}}\ C \text{ of } \vec{b} \\
\text{Branches} & : \quad b ::= \Delta; \vec{r}\,.\ r \Rightarrow e \\
\text{Assumptions} & : \quad r ::= f\ \vec{C}\ C \\
\text{Contexts} & : \quad \Gamma ::= \cdot \mid \Gamma, y{:}\tau \mid \Gamma, r{:}\tau \\
\text{Meta Context} & : \quad \Delta ::= \cdot \mid \Delta, X{:}U
\end{align*}
\]
We distinguish between computation variables, simply referred to as variables and written using a lower-case letter \( y \), and variables that are bound by \( \Pi \)-types and \( \Lambda \)-abstraction, referred to as meta-variables and written using an upper-case letter \( X \). Meta-variables occur inside a domain object. For example, we saw earlier the object \( (\psi \vdash \text{app}\ B\ C\ (M[\ldots])\ (N[\ldots])) \). Here, \( \psi \), \( B \), \( C \), \( M \), and \( N \) are referred to as meta-variables.
There are three forms of computation-level types \( \tau \). The base type \([U]\) is introduced by wrapping a contextual object \( C \) inside a box; an object of type \([U]\) is eliminated by a let-expression, effectively unboxing a domain object. The non-dependent function space \( \tau_1 \to \tau_2 \) is introduced by function abstraction \( \text{fn } y{:}\tau_1 \Rightarrow e \) and eliminated by application \( e_1\ e_2 \). Finally, the dependent function type \( \Pi X{:}U.\tau \), which corresponds to universal quantification in predicate logic, is introduced by abstraction \( \Lambda X{:}U \Rightarrow e \) over meta-variables \( X \) and eliminated by application to a meta-object \( C \), written as \( e\ C \). The type annotations on both abstractions ensure that every expression has a unique type. Note that we can index computation-level types \( \tau \) only by meta-objects (but this includes LF contexts!), not by arbitrary computation-level objects. Thus, the resulting logic is just first-order, although the proofs we can write correspond to higher-order functional programs manipulating HOAS objects.
Our language supports pattern matching on a meta-object \( C \) using \text{rec-case}\-expressions. Note that one cannot match on a computational object \( e \) directly; instead one can bind an expression of type \([U]\) to a meta variable \( X \) using \text{let} and then match on \( X \). We annotate the recursor \text{rec-case} with the type of the inductive invariant \( \Pi \Delta_0. \tau_0 \) which the recursion satisfies. Since we are working in a dependently-typed setting, it is not sufficient to simply state the type \( U \) of the scrutinee. Instead, we generalize over the index variables occurring in the scrutinee, since they may be refined during pattern matching. Hence, \( \Delta_0 = \Delta_1, X_0 : U_0 \) where \( \Delta_1 \) exactly describes the free meta-variables occurring in \( U_0 \). The intention is that we induct on objects of type \( U_0 \) which may depend on \( \Delta_1 \). \( \Delta_0 \) must therefore contain at least one declaration. We also give the return type \( \tau_0 \) of the recursor, since it might also depend on \( \Delta_0 \) and might be refined during pattern matching. This is analogous to Coq’s \text{match as in return with end} construct.
One might ask whether this form of inductive invariant is too restrictive, since it seems not to capture, e.g., \( \Pi \Delta_0. (\tau \to \Pi X{:}U. \tau') \). While allowing more general invariants does not pose any fundamental issues, we simply note here that the above type is isomorphic to \( \Pi (\Delta_0, X{:}U).\, \tau \to \tau' \), which is treated by our calculus. Forcing all quantifiers to the outside simplifies our theoretical development; our implementation, however, is more flexible.
A branch \( b_i \) is expressed as \( \Delta_i; \vec{r_i}\,.\ r_{i0} \Rightarrow e_i \). As shown in the examples, we explicitly list all pattern variables (i.e. meta-variables) occurring in the pattern in \( \Delta_i \). In practice, they often can be inferred (see for example [12]). We also list all valid well-founded recursive calls \( \vec{r_i} \), i.e. \( r_{i1}, \ldots, r_{ik} \), for the pattern \( r_{i0} \). In practice, they can also be derived dynamically while we check that a given set of patterns is covering; we give an algorithm in Section 6.
Finally, $\vec{b}$ must cover the meta-context $\Delta_0$, i.e., it must be a complete, non-redundant set of patterns covering $\Delta_0$, and all recursive calls must be well-founded. Since both the coverage check and the well-foundedness check are domain specific, we leave them abstract for now and return to them when we consider (contextual) LF as one possible domain (see Sec. 5).
\[ \Delta; \Gamma \vdash e : \tau \] Computation $e$ has type $\tau$
\[
\frac{\Gamma(y) = \tau}{\Delta; \Gamma \vdash y : \tau}
\qquad
\frac{\Delta \vdash C : U}{\Delta; \Gamma \vdash [C] : [U]}
\qquad
\frac{\Delta; \Gamma, y{:}\tau_1 \vdash e : \tau_2}{\Delta; \Gamma \vdash \text{fn } y{:}\tau_1 \Rightarrow e : \tau_1 \to \tau_2}
\]
\[
\frac{\Delta; \Gamma \vdash e_1 : \tau_1 \to \tau_2 \quad \Delta; \Gamma \vdash e_2 : \tau_1}{\Delta; \Gamma \vdash e_1\ e_2 : \tau_2}
\qquad
\frac{\Delta, X{:}U; \Gamma \vdash e : \tau}{\Delta; \Gamma \vdash \Lambda X{:}U \Rightarrow e : \Pi X{:}U.\tau}
\qquad
\frac{\Delta; \Gamma \vdash e : \Pi X{:}U.\tau \quad \Delta \vdash C : U}{\Delta; \Gamma \vdash e\ C : [C/X]\tau}
\]
\[
\frac{\Delta; \Gamma \vdash e_1 : [U] \quad \Delta, X{:}U; \Gamma \vdash e_2 : \tau}{\Delta; \Gamma \vdash \text{let } X = e_1 \text{ in } e_2 : \tau}
\qquad
\frac{\mathcal{I} = \Pi(\Delta_1, X_0{:}U_0).\tau_0 \quad \Delta \vdash \theta : \Delta_1 \quad \Delta \vdash C : [\theta]U_0 \quad b_i : \mathcal{I} \text{ for all } b_i \in \vec{b}}{\Delta; \Gamma \vdash \text{rec-case}^{\mathcal{I}}\ C \text{ of } \vec{b} : [\theta, C/X_0]\tau_0}
\]
\[ b : \mathcal{I} \] Branch $b$ satisfies the invariant $\mathcal{I}$
\[
\frac{\text{for all } 0 \leq j \leq k.\ \Delta_i \vdash r_j : \tau_j \qquad \Delta_i; r_k{:}\tau_k, \ldots, r_1{:}\tau_1 \vdash e : \tau_0}{(\Delta_i; r_k, \ldots, r_1 \,.\ r_0 \Rightarrow e) : \mathcal{I}}
\]
\[ \Delta \vdash r : \tau' \qquad \Delta \vdash \vec{C} : \mathcal{I} > \tau' \] Assumption $r$ / pattern spine $\vec{C}$ has type $\tau'$ given $\mathcal{I}$
\[
\frac{\Delta \vdash \vec{C}\ C_0 : \mathcal{I} > \tau'}{\Delta \vdash f\ \vec{C}\ C_0 : \tau'}
\qquad
\frac{\Delta \vdash C : U \quad \Delta \vdash \vec{C} : [C/X]\tau > \tau'}{\Delta \vdash C\ \vec{C} : \Pi X{:}U.\tau > \tau'}
\qquad
\frac{\ }{\Delta \vdash \cdot : \tau > \tau}
\]
Well-founded Recursion over Contextual Objects
\[
(\text{fn } y{:}\tau \Rightarrow e)\ v \mapsto [v/y]e
\qquad
(\Lambda X{:}U \Rightarrow e)\ C \mapsto [C/X]e
\qquad
\text{let } X = [C] \text{ in } e \mapsto [C/X]e
\]
\[
\frac{\exists \text{ unique } (\Delta_i; r_k, \ldots, r_1 \,.\ r_0 \Rightarrow e) \in \vec{b} \text{ where } r_j = f\ \vec{C}_j\ C_{j0} \text{ and } [\theta]C_{00} = C}{\text{rec-case}^{\mathcal{I}}\ C \text{ of } \vec{b} \mapsto [\theta]\big[(\text{rec-case}^{\mathcal{I}}\ C_{k0} \text{ of } \vec{b})/r_k, \ldots, (\text{rec-case}^{\mathcal{I}}\ C_{10} \text{ of } \vec{b})/r_1\big]\, e}
\]
Figure 2 Small-step semantics $e \mapsto e'$
Note that we drop the meta-context \(\Delta\) and the computation context \(\Gamma\) when we proceed to check that all branches satisfy the specified invariant. Dropping \(\Delta\) is fine, since we require the invariant \(\Pi\Delta_0.\tau_0\) to be closed. One might object to dropping \(\Gamma\); indeed this could be generalized to keeping those assumptions from \(\Gamma\) which do not depend on \(\Delta\) and generalizing the allowed type of inductive invariant (see our earlier remark).
For a branch \(b = \Delta_i; \vec{r}\,.\ r_0 \Rightarrow e\) to be well-typed with respect to a given invariant \(\mathcal{I}\), we check the call pattern \(r_0\) and each recursive call \(r_j\) against the invariant and synthesize target types \(\tau_j\) (\(j \geq 0\)). We then continue checking the body \(e\) against \(\tau_0\), i.e., the target type of the call pattern \(r_0\), populating the computation context with the recursive calls \(\vec{r}\) at their types \(\vec{\tau}\). A pattern or recursive call \(r_j = f\ C_n \ldots C_1\ C_0\) intuitively corresponds to the given inductive invariant \(\mathcal{I} = \Pi\Delta_1.\Pi X_0{:}U_0.\tau_0\) if the spine \(C_n \ldots C_1\ C_0\) matches the types specified in \(\Delta_1, X_0{:}U_0\); it then has the type \([C_n/X_n, \ldots, C_1/X_1, C_0/X_0]\tau_0\), which we denote by \(\tau_j\).
More generally, we write \(\Delta \vdash \theta : \Delta_0\) for a well-typed simultaneous substitution where \(\Delta_0\) is the domain and \(\Delta\) is the range of the substitution. It can be inductively defined (see below) and the standard substitution lemmas hold (see for example [4]).
\[
\frac{\ }{\Delta \vdash \cdot\, : \cdot}
\qquad
\frac{\Delta \vdash \theta : \Delta_0 \quad \Delta \vdash C : [\theta]U}{\Delta \vdash \theta, C/X : \Delta_0, X{:}U}
\]
4 Operational Semantics
Fig. 2 specifies the call-by-value (cbv) one-step reduction relation \(e \mapsto e'\); we have omitted the usual congruence rules for cbv. Reduction is deterministic and does not get stuck on closed terms, due to completeness of pattern matching in \(\text{rec-case}\). To reduce \(\text{rec-case}^{\mathcal{I}}\ C\) of \(\vec{b}\) we find the branch \((\Delta_i; r_k, \ldots, r_1 \,.\ r_0 \Rightarrow e) \in \vec{b}\) such that the principal argument \(C_{00}\) of its clause head \(r_0 = f\ \vec{C}_0\ C_{00}\) matches \(C\) under meta-substitution \(\theta\). The reduct is the body \(e\) under \(\theta\), where we additionally replace each placeholder \(r_j\) of a recursive call by the actual recursive invocation \((\text{rec-case}^{\mathcal{I}}\ [\theta]C_{j0}\) of \(\vec{b})\); the object \(C_{j0}\) in fact just denotes the meta-variable on which we are recursing. We also apply \(\theta\) to the body \(e\); in the rule, we have lifted \(\theta\) out so that it applies to the body and the recursive invocations at once. Values \(v\) in our language are boxed meta-objects \([C]\), functions \(\text{fn } y{:}\tau \Rightarrow e\), and \(\Lambda X{:}U \Rightarrow e\).
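The branch-selection step of the rec-case rule can be illustrated concretely for the domain of natural numbers (standing in for the abstract domain): find the unique matching branch, compute the matching substitution θ, and replace the recursive-call placeholder by an actual invocation on the strictly smaller argument. All names below are illustrative, not Beluga syntax.

```python
def rec_case(scrutinee, branches):
    """branches: list of (pattern, body); pattern is ('z',) or ('s', X);
    body receives (theta, recurse), where recurse performs the
    well-founded recursive call on the strictly smaller argument."""
    for pat, body in branches:
        if pat == ('z',) and scrutinee == 0:
            return body({}, None)
        if pat[0] == 's' and scrutinee > 0:
            theta = {pat[1]: scrutinee - 1}          # matching substitution θ
            recurse = lambda: rec_case(theta[pat[1]], branches)
            return body(theta, recurse)
    raise RuntimeError('non-exhaustive branches')    # excluded by coverage

# double: rec-case n of  z ⇒ z  |  s X ⇒ s (s (double X))
def double(n):
    return rec_case(n, [
        (('z',), lambda th, r: 0),
        (('s', 'X'), lambda th, r: 2 + r()),
    ])

assert double(0) == 0
assert double(5) == 10
```

Since every recursive invocation runs on `scrutinee - 1`, the evaluation mirrors the well-foundedness requirement: each unfolding of rec-case is on a strictly smaller object.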
**Theorem 1** (Subject reduction). If \(\cdot\,; \cdot \vdash e : \tau\) and \(e \mapsto e'\), then \(\cdot\,; \cdot \vdash e' : \tau\).

**Proof.** By induction on \(e \mapsto e'\). △

**Theorem 2** (Progress). If \(\cdot\,; \cdot \vdash e : \tau\), then either \(e\) is a value or \(e \mapsto e'\) for some \(e'\).

**Proof.** By induction on \(\cdot\,; \cdot \vdash e : \tau\). △
5 Contextual LF: Background, Measure, Splitting
If we choose as our domain natural numbers or lists, it may be obvious how to define splitting together with a measure that describes when an object is smaller. Our interest, however, is to use the contextual logical framework LF [9] as a general domain language. Contextual LF extends the logical framework LF [8] by packaging an LF object \( M \) of type \( A \) together with the context \( \Psi \) in which it is meaningful. This allows us to represent rich syntactic structures such as open terms and derivation trees that depend on hypotheses. The core language introduced in Sec. 3 then allows us to implement well-founded recursive programs over these rich abstract syntax trees that correspond to proofs by structural induction.
5.1 Contextual LF
We briefly review contextual LF here. As usual we consider only objects in \( \eta \)-long \( \beta \)-normal form, since these are the only meaningful objects in LF. Further, we concentrate on characterizing well-typed terms; spelling out kinds and kinding rules for types is straightforward.
| LF Base Types | \( P, Q \) \( ::= \ c \cdot S \) |
| LF Types | \( A, B \) \( ::= \ P \mid \Pi x:A.B \) |
| Heads | \( H \) \( ::= \ c \mid x \mid p[\sigma] \) |
| Neutral Terms | \( R \) \( ::= \ H \cdot S \mid u[\sigma] \) |
| Spines | \( S \) \( ::= \ \text{nil} \mid M \ S \) |
| Normal Terms | \( M, N \) \( ::= \ R \mid \lambda x.M \) |
| Substitutions | \( \sigma \) \( ::= \ \cdot \mid \text{id}_\psi \mid \sigma, M \mid \sigma; x \) |
| Variable Substitutions | \( \pi \) \( ::= \ \cdot \mid \text{id}_\psi \mid \pi; x \) |
| LF Contexts | \( \Psi, \Phi \) \( ::= \ \cdot \mid \psi \mid \Psi, x:A \) |
Normal terms are either lambda-abstractions or neutral terms which are defined using a spine representation to give us direct access to the head of a neutral term. Normal objects may contain ordinary bound variables \( x \) which are used to represent object-level binders and are bound by \( \lambda \)-abstraction or in a context \( \Psi \). Contextual LF extends LF by allowing two kinds of contextual variables: the meta-variable \( u \) has type \( (\Psi.P) \) and stands for a general LF object that has type \( P \) and may use the variables declared in \( \Psi \); the parameter variable \( p \) has type \( \#(\Psi.A) \) and stands for an LF variable object of type \( A \) in the context \( \Psi \).
Contextual variables are associated with a postponed substitution \( \sigma \) which is applied as soon as we instantiate it. More precisely, a meta-variable \( u \) stands for a contextual object \( \hat{\Psi}.R \) where \( \hat{\Psi} \) describes the ordinary bound variables which may occur in \( R \). This allows us to rename the free variables occurring in \( R \) when necessary. The parameter variable \( p \) stands for a contextual object \( \hat{\Psi}.H \) where \( H \) must be either an ordinary bound variable from \( \hat{\Psi} \) or another parameter variable.
In the simultaneous substitutions \( \sigma \), we do not make the domain explicit. Rather, we think of a substitution together with its domain \( \Psi \), and the \( i \)-th element in \( \sigma \) corresponds to the \( i \)-th declaration in \( \Psi \). We have two different ways of building a substitution entry: either by using a normal term \( M \) or a variable \( x \). Note that a variable \( x \) is only a normal term \( M \) if it is of base type. However, as we push a substitution \( \sigma \) through a \( \lambda \)-abstraction \( \lambda x.M \), we need to extend \( \sigma \) with \( x \). The naive extension \( \sigma, x \) may not be well-formed, since \( x \) may not be of base type and in fact we do not know its type. Hence, we allow substitutions not only to be extended with normal terms \( M \) but also with variables \( x \); in the latter case we
write $\sigma;x$. Expression $\text{id}_\psi$ denotes the identity substitution with domain $\psi$ while $\cdot$ describes the empty substitution.
Application of a substitution $\sigma$ to an LF normal form $B$, written as $\lfloor \sigma \rfloor B$, is hereditary [19] and produces in turn a normal form by removing generated redexes on the fly, possibly triggering further hereditary substitutions.
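The effect of hereditary substitution can be sketched on a toy named-term representation: substituting into an application may create a β-redex, which is contracted immediately by a further substitution so that the result stays normal. This sketch assumes α-freshness (bound names distinct from the substituted variable and from the free variables of the substituted term), an assumption the real implementation discharges via contextual machinery.

```python
def subst(m, x, n):
    """Hereditarily substitute term m for variable x in the normal term n,
    reducing any redexes created on the fly so the result stays normal.
    Terms: ('var', x) | ('lam', x, body) | ('app', fn, arg)."""
    tag = n[0]
    if tag == 'var':
        return m if n[1] == x else n
    if tag == 'lam':
        y, body = n[1], n[2]        # assumed fresh: y != x, y not free in m
        return ('lam', y, subst(m, x, body))
    if tag == 'app':
        f = subst(m, x, n[1])
        a = subst(m, x, n[2])
        if f[0] == 'lam':           # substitution created a redex:
            return subst(a, f[1], f[2])   # contract it hereditarily
        return ('app', f, a)
    raise ValueError(n)

# [(λy.y)/x] (x z)  reduces on the fly to  z
assert subst(('lam', 'y', ('var', 'y')), 'x',
             ('app', ('var', 'x'), ('var', 'z'))) == ('var', 'z')
```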
An LF context $\Psi$ is either a list of bound variable declarations $\overline{x : A}$ or a context variable $\psi$ followed by such declarations. We write $\Psi^0$ for contexts that do not start with a context variable. We write $\Psi, \Phi^0$ or sometimes $\Psi, \Phi$ for the extension of context $\Psi$ by the variable declarations of $\Phi^0$ or $\Phi$, resp. The operation $\text{id}(\Psi)$ that generates an identity substitution for a given context $\Psi$ is defined inductively as follows: $\text{id}(\cdot) = \cdot$, $\text{id}(\psi) = \text{id}_\psi$, and $\text{id}(\Psi, x{:}A) = \text{id}(\Psi); x$.
We summarize the bi-directional type system for contextual LF in Fig. 3. LF objects may depend on variables declared in the context $\Psi$ and a fixed meta-context $\Delta$ which contains contextual variables such as meta-variables $u$, parameter variables $p$, and context variables $\psi$. All typing judgments have access to both contexts and a fixed well-typed signature $\Sigma$ where we store constants $c$ together with their types and kinds.
### 5.2 Meta-level Terms and Typing Rules
We lift contextual LF objects to meta-objects to have a uniform definition of all meta-objects. We also define context schemas $G$ that classify contexts.
| Context Schemas | \( G ::= \exists \Phi^0.B \mid G + \exists \Phi^0.B \) |
| Meta Types | \( U, V ::= \Psi.P \mid G \mid \#\Psi.A \) |
| Meta Objects | \( C, D ::= \hat{\Psi}.R \mid \Psi \) |
A consequence of the uniform treatment of meta-terms is that the design of the computation language is modular and parametrized over meta-terms and meta-types. This has two main advantages: First, we can in principle easily extend meta-terms and meta-types without affecting the computation language; second, it will be key to a modular, clean design.
The above definition gives rise to a compact treatment of meta-context. A meta-variable \( X \) can denote a meta-variable \( u \), a parameter variable \( p \), or a context variable \( \psi \). Meta substitution \( C/X \) can represent \( \hat{\Psi}.R/u \), or \( \Psi/\psi \), or \( \hat{\Psi}.x/p \), or \( \hat{\Psi}.p[\pi]/p \) (where \( \pi \) is a variable substitution so that \( p[\pi] \) always produces a variable). A meta declaration \( X:U \) can stand for \( u : \hat{\Psi}.P \), or \( p : \#\Psi.A \), or \( \psi : G \). Intuitively, as soon as we replace \( u \) with \( \hat{\Psi}.R \) in \( u[\sigma] \), we apply the substitution \( \sigma \) to \( R \) hereditarily. The simultaneous meta-substitution, written as \([\theta]\), is a straightforward extension of the single substitution. For a full definition of meta-substitutions, see [9, 4]. We summarize the typing rules for meta-objects below.
\[ \Delta \vdash C : U \] Check meta-object $C$ against meta-type $U$
\[
\frac{\Delta; \Psi \vdash R \Leftarrow P}{\Delta \vdash \hat{\Psi}.R : \Psi.P}
\qquad
\frac{\ }{\Delta \vdash \cdot : G}
\qquad
\frac{(\psi : G) \in \Delta}{\Delta \vdash \psi : G}
\]
We write \( \hat{\Psi} \) for the list of variables obtained by erasing the types from the context \( \Psi \). We have omitted the rules for parameter types \( \#\Psi.A \) because they are not important for the further development. Intuitively, an object \( R \) has type \( \#\Psi.A \) if \( R \) is either a concrete variable \( x \) of type \( A \) in the context \( \Psi \) or a parameter variable \( p \) of type \( A \) in the context \( \Psi \). This can be generalized to account for re-ordering of variables, allowing the parameter variable \( p \) to have some type \( A' \) in a context \( \Psi' \) s.t. there exists a permutation substitution \( \pi \) on the variables with \( \Psi \vdash \pi : \Psi' \) and \( A = [\pi]A' \).
### 5.3 Well-founded Structural Subterm Order
There are two key ingredients to guarantee that a given function is total: we need to ensure that all the recursive calls are on smaller arguments according to a well-founded order, and that the function covers all possible cases. We define here a well-founded structural subterm order on contexts and contextual objects, similar to the subterm relations for LF objects [10]. For simplicity, we only consider here non-mutually recursive type families; mutual recursion can be incorporated using the notion of subordination [18].
We first define an ordering on contexts: \( \Psi \leq \Phi \), read as “context \( \Psi \) is a subcontext of \( \Phi \)”, shall hold if all declarations of \( \Psi \) are also present in the context \( \Phi \), i.e., \( \Psi \subseteq \Phi \). The strict relation \( \Psi < \Phi \), read as “context \( \Psi \) is strictly smaller than context \( \Phi \)” holds if \( \Psi \leq \Phi \) but \( \Psi \) is strictly shorter than \( \Phi \).
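Reading "all declarations of Ψ are also present in Φ" as an order-preserving sublist of declarations (one reasonable reading for dependently ordered contexts), the two relations can be sketched as follows; the list-of-strings encoding is illustrative only.

```python
def sub_context(psi, phi):
    """Ψ ≤ Φ: every declaration of Ψ also occurs in Φ, in order.
    Classic subsequence test: `d in it` consumes the iterator."""
    it = iter(phi)
    return all(d in it for d in psi)

def strictly_smaller(psi, phi):
    """Ψ < Φ: Ψ ≤ Φ and Ψ is strictly shorter than Φ."""
    return sub_context(psi, phi) and len(psi) < len(phi)

assert sub_context(['x:tm A'], ['x:tm A', 'y:tm B'])
assert strictly_smaller(['x:tm A'], ['x:tm A', 'y:tm B'])
assert not strictly_smaller(['x:tm A', 'y:tm B'], ['x:tm A', 'y:tm B'])
```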
Further, we define three relations on contextual objects \( \hat{\Psi}.M \): a strict subterm relation \( \prec \), an equivalence relation \( \equiv \), and an auxiliary relation \( \preceq \).
\[
\begin{array}{ll}
\hat{\Psi}.M \equiv \hat{\Phi}.N & \text{if } M = [\pi]N \text{ for some variable substitution } \pi \\
\hat{\Psi}.M \prec \hat{\Phi}.(H \cdot M_1 \ldots M_n\ \text{nil}) & \text{if } \hat{\Psi}.M \preceq \hat{\Phi}.M_i \text{ for some } 1 \leq i \leq n \\
\hat{\Psi}.M \prec \hat{\Phi}.(\lambda x.N) & \text{if } \hat{\Psi}.M \prec (\hat{\Phi}, x).N \\
\hat{\Psi}.M \preceq \hat{\Phi}.N & \text{if } \hat{\Psi}.M \prec \hat{\Phi}.N \text{ or } \hat{\Psi}.M \equiv \hat{\Phi}.N \\
\hat{\Psi}.M \preceq \hat{\Phi}.(\lambda x.N) & \text{if } \hat{\Psi}.M \preceq (\hat{\Phi}, x).N
\end{array}
\]
\( \hat{\Psi}.M \) is a strict subterm of \( \hat{\Phi}.N \) if \( M \) is a proper subterm of \( N \) modulo \( \alpha \)-renaming and weakening. Two terms \( \hat{\Psi}.M \) and \( \hat{\Phi}.N \) are structurally equivalent, if they describe the same
term modulo $\alpha$-renaming and possible weakening. To allow mutually recursive definitions and richer subterm relationships, we can incorporate subordination information and generalize the variable substitution $\pi$ (see for example [10] for such a generalization). Using the defined subterm order, we can easily verify that the recursive calls in the examples are structurally smaller.
The given subterm relation is well-founded. We define the measure $||\Psi||$ of a ground context $\Psi^0$ or its erasure $\hat{\Psi}$ as its length $|\Psi|$. The measure $||\hat{\Psi}.M||$ of a contextual object $\hat{\Psi}.M$, is the measure $||M||$ of $M$. The latter is defined inductively by:
$$ ||h \cdot M_1 \ldots M_n \text{ nil}|| = 1 + \max(||M_1||, \ldots, ||M_n||) $$
$$ ||\lambda x. M|| = ||M|| $$
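The measure can be transcribed directly: neutral terms take one plus the maximum measure over their spine (with the maximum over an empty spine taken as 0), and λ-abstractions are transparent. The tuple encoding below is an illustrative stand-in for the spine representation.

```python
def measure(m):
    """||M||: a neutral term ('neutral', head, [M1..Mn]) measures
    1 + max(||M1||, .., ||Mn||), taken as 1 for an empty spine;
    a lambda-abstraction ('lam', body) is measured by its body."""
    tag = m[0]
    if tag == 'neutral':
        return 1 + max((measure(arg) for arg in m[2]), default=0)
    if tag == 'lam':
        return measure(m[1])
    raise ValueError(m)

# ||h (λx. h2 nil) (h3 nil)|| = 1 + max(1, 1) = 2
t = ('neutral', 'h', [('lam', ('neutral', 'h2', [])), ('neutral', 'h3', [])])
assert measure(t) == 2
```

A strict subterm in the sense above sits strictly inside some spine, so its measure is strictly smaller, which is the content of Theorem 3.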
**Theorem 3** (Order on contextual objects is well-founded). Let $\theta$ be a grounding meta-substitution.
1. If $C \prec C'$ then $||[\theta]C|| < ||[\theta]C'||$.
2. If $C \equiv C'$ then $||[\theta]C|| = ||[\theta]C'||$.
3. If $C \preceq C'$ then $||[\theta]C|| \leq ||[\theta]C'||$.
### 5.4 Case Splitting
Our language allows pattern matching and recursion over contextual objects. For well-formed recursors (rec-case$^{\mathcal{I}}$ $C$ of $\vec{b}$) with invariant $\mathcal{I} = \Pi \Delta_1. \Pi X{:}U. \tau$, the branches $\vec{b}$ need to cover all different cases for the argument $C$ of type $U$. We only take the shape of $U$ into account and generate the unique complete set $\mathcal{U}_{\Delta;U}$ of non-overlapping shallow patterns by splitting a meta-variable $X$ of type $U$.
If $U = \Psi.P$ is a base type, then intuitively the set $\mathcal{U}_{\Delta;\Psi.P}$ contains all neutral terms $R = H \cdot S$ where $H$ is a constructor $c$, a concrete variable $x$ from $\Psi$, or a parameter variable $p[\text{id}_\psi]$ denoting a variable from the context variable $\psi$, and $S$ is a most general spine s.t. the type of $R$ is an instance of $P$ in the context $\Psi$. We note that when considering only closed terms it suffices to consider only terms with $H = c$. However, when considering terms with respect to a context $\Psi$, we must generate additional cases covering the scenario where $H$ is a variable: either a concrete variable $x$ if $x{:}A$ is in $\Psi$, or a parameter variable if the context is described abstractly using a context variable $\psi$.
If $U$ denotes a context schema $G$, we generate all shallow context patterns of type $G$. This includes the empty context and a context extended with a declaration formed by $\Psi, x:A$.
From $\mathcal{U}_{\Delta;U}$ we generate the complete minimal set $\mathcal{C} = \{\Delta_i; r_{i1}, \ldots, r_{ik}, r_{i0} \mid 1 \leq i \leq n\}$ of possible, non-overlapping cases, where the $i$-th branch shall have the well-founded recursive calls $r_{i1}$, ..., $r_{ik}$ for the case $r_{i0}$. For the given branches $\vec{b}$ to be covering, each element in $\mathcal{C}$ must correspond to one branch $b_i$.
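For a first-order signature (standing in for the closed-terms fragment, where every head is a constant $c$), generating the complete set of non-overlapping shallow patterns amounts to one pattern per constructor with fresh pattern variables; the variable and parameter-variable heads discussed above would be added when splitting relative to a context. The signature below is a toy example, not part of the paper's development.

```python
# Toy signature: z : nat, s : nat -> nat (illustrative only).
SIG = {'nat': {'z': [], 's': ['nat']}}

def split(ty):
    """Complete, non-redundant set of shallow patterns for type `ty`:
    one pattern per constructor, with fresh pattern variables X0, X1, ..."""
    return [(c, [f'X{i}' for i in range(len(args))])
            for c, args in SIG[ty].items()]

assert split('nat') == [('z', []), ('s', ['X0'])]
```

Non-redundancy holds because distinct patterns have distinct heads; completeness holds because every constructor of the type appears exactly once.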
**Splitting on a Contextual Type**
Following [5, 17], the patterns $R$ of type $\Psi.P$ are computed by brute force: We first synthesize a set $H_{\Delta,\Psi}$ of all possible heads together with their type: constants $c \in \Sigma$, variables $x \in \Psi$, and parameter variables if $\Psi$ starts with a context variable $\psi$.
$$ \mathcal{H}_{\Delta;\Psi} = \{ (\Delta; \Psi \vdash c : A) \mid (c{:}A) \in \Sigma \} $$
$$ \cup\ \{ (\Delta; \Psi \vdash x : A) \mid (x{:}A) \in \Psi \} $$
$$ \cup\ \{ (\Delta, \overline{X{:}U}, p{:}\#(\psi.B'); \Psi \vdash p[\text{id}_\psi] : B') \mid \Psi = \psi, \Psi^0 \text{ and } (\psi{:}G) \in \Delta \text{ and } \exists \overline{x{:}A}.B \in G \text{ and } \text{genMV}(\psi.A_i) = (X_i{:}U_i, M_i) \text{ for all } i, \text{ and } B' = [\overline{M}/\overline{x}]B \} $$
Generation of a lowered meta-variable: $\text{genMV}(\Psi.A) = (X{:}U, M)$
\[
\text{genMV}(\Psi.\,\Pi x{:}A.P) = (u : (\Psi, x{:}A).P,\ \lambda x.\, u[\text{id}(\Psi, x{:}A)]) \quad \text{for a fresh meta-variable } u
\]
Extending a head $H : A$ to a most general neutral term $R_0$ whose type matches $P$ is described by the judgment $\Delta; \Psi \vdash H : A > P \,/\, (\Delta', \theta, R_0)$. The set of patterns of contextual type $\Psi.P$ is then
\[
\mathcal{U}_{\Delta; \Psi.P} = \{ (\Delta'' \vdash \hat{\Phi}.R : \Phi.Q) \mid (\Delta'; \Psi \vdash H : A) \in \mathcal{H}_{\Delta; \Psi} \text{ and } \Delta'; \Psi \vdash H : A > P \,/\, (\Delta'', \theta, R) \text{ and } \Phi = [\theta]\Psi \text{ and } Q = [\theta]P \}
\]
**Splitting on a Context Schema**

Splitting a context variable of schema $G$ generates the empty context and the non-empty contexts $(\psi, x{:}B')$ for each possible form of context entry $\exists \overline{x{:}A}.B \in G$.
\[
\mathcal{U}_{\Delta; G} = \{ (\Delta \vdash \cdot : G) \} \cup \{ (\Delta, \psi{:}G, \overline{X{:}U} \vdash (\psi, x{:}[\overline{M}/\overline{x}]B) : G) \mid \psi \text{ a fresh context variable, } \exists \overline{x{:}A}.B \in G, \text{ and } \text{genMV}(\psi.A_i) = (X_i{:}U_i, M_i) \text{ for all } i \}
\]
Theorem 4 (Splitting on meta-types). The set \( U_{Δ; U} \) of meta-objects generated is non-redundant and complete.
Proof. \( U_{Δ; G} \) is obviously non-redundant. \( U_{Δ; Ψ, P} \) is non-redundant since all generated neutral terms have distinct heads. Completeness is proven by cases.
6 Generation of Call Patterns and Coverage
Next, we explain the generation of call patterns, i.e. well-founded recursive calls as well as the actual call pattern being considered.
**Definition 5** (Generation of call patterns). Given the invariant $\mathcal{I} = \Pi(\Delta_1, X_0{:}U_0).\tau_0$ where $\Delta_1 = X_n{:}U_n, \ldots, X_1{:}U_1$, the set $\mathcal{C}$ of call patterns $(\Delta_i; r_{i1}{:}\tau_{i1}, \ldots, r_{ik}{:}\tau_{ik}, r_{i0})$ is generated as follows:
- For each meta-object $\Delta_i \vdash C_{i0} : V_i$ in $\mathcal{U}_{\Delta_1; U_0}$, we generate using unification, if possible, a call pattern $r_{i0} = f \ C_{in} \ldots C_{i1} \ C_{i0}$ s.t. $\tau_{i0} = [C_{in}/X_n, \ldots, C_{i1}/X_1, C_{i0}/X_0]\tau_0$ and $\Delta_i \vdash r_{i0} : \tau_{i0}$. This may fail if $V_i$ is not an instance of the scrutinee type $U_0$; then the case $C_{i0}$ is impossible.
- Further, for each $Y_j{:}V_j$ in $\Delta_i = Y_m{:}V_m, \ldots, Y_1{:}V_1$, we generate a recursive call $r_{ij} = f \ C'_{jn} \ldots C'_{j1} \ Y_j$ s.t. $\tau_{ij} = [C'_{jn}/X_n, \ldots, C'_{j1}/X_1, Y_j/X_0]\tau_0$ and $\Delta_i \vdash r_{ij} : \tau_{ij}$, provided $Y_j \prec C_{i0}$. This may also fail if $V_j$ is not an instance of $U_0$; in this case $Y_j$ does not give rise to a recursive call.
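Continuing the natural-number illustration from Sec. 5.4, Definition 5 pairs each shallow pattern with its well-founded recursive calls: every pattern variable of the scrutinee's type is a strict subterm of the pattern and hence eligible. The function name `plus` and the tuple encoding are hypothetical.

```python
def call_patterns(split_pats, f='plus'):
    """For each shallow pattern (c, [X1..Xn]) of the scrutinee, record the
    call pattern f (c X1..Xn) together with one recursive call f Xi per
    pattern variable (each Xi is strictly smaller than the pattern)."""
    cases = []
    for c, vars_ in split_pats:
        pattern = (f, (c, *vars_))
        rec_calls = [(f, v) for v in vars_]   # Xi ≺ (c X1..Xn)
        cases.append((pattern, rec_calls))
    return cases

cs = call_patterns([('z', []), ('s', ['X0'])])
assert cs == [(('plus', ('z',)), []),
              (('plus', ('s', 'X0')), [('plus', 'X0')])]
```

The base case `z` yields no recursive call, while `s X0` yields exactly the structurally smaller call on `X0`, matching the shape of the examples in Sec. 2.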
**Theorem 6** (Pattern generation). The set $\mathcal{C}$ of call patterns generated is non-redundant and complete and the recursive calls are well-founded.
**Proof.** Using Theorem 4 and the properties of unification. △
**Definition 7** (Coverage). We say $\mathbf{\overline{b}}$ covers $\mathcal{I}$ if for every $\Delta_i; r_{i1}:\tau_{i1}, \ldots, r_{ik}:\tau_{ik}, r_{i0} \in \mathcal{C}$ where $\mathcal{C}$ is the set of call patterns given $\mathcal{I}$, we have one corresponding $\Delta_i; r_{i1}:\tau_{i1}, \ldots, r_{ik}:\tau_{ik}, r_{i0} \Rightarrow e_i \in \mathbf{\overline{b}}$ and vice versa.
### 7 Termination
We now prove that every well-typed closed program $e$ terminates (halts) by a standard reducibility argument; closely related is [20]. The set $\mathcal{R}_\tau$ of reducible closed programs $\cdot; \cdot \vdash e : \tau$ is defined by induction on the size of $\tau$.
<table>
<tbody>
<tr>
<td>Contextual Type</td>
<td>$\mathcal{R}_{[\mathcal{U}]}$</td>
<td>$\{ e \mid \cdot; \cdot \vdash e : [\mathcal{U}] \text{ and } e \text{ halts} \}$</td>
</tr>
<tr>
<td>Function Type</td>
<td>$\mathcal{R}_{\tau' \rightarrow \tau}$</td>
<td>$\{ e \mid \cdot; \cdot \vdash e : \tau' \rightarrow \tau \text{ and } e \text{ halts and } \forall e' \in \mathcal{R}_{\tau'}.\ e\ e' \in \mathcal{R}_{\tau} \}$</td>
</tr>
<tr>
<td>Dependent Type</td>
<td>$\mathcal{R}_{\Pi X{:}U.\tau}$</td>
<td>$\{ e \mid \cdot; \cdot \vdash e : \Pi X{:}U.\tau \text{ and } e \text{ halts and } \forall C \text{ s.t. } \vdash C : U.\ e\ C \in \mathcal{R}_{[C/X]\tau} \}$</td>
</tr>
<tr>
<td>Context</td>
<td>$\mathcal{R}_{\Gamma}$</td>
<td>$\{ \eta \mid \cdot; \cdot \vdash \eta : \Gamma \text{ and } \eta(x) \in \mathcal{R}_{\tau} \text{ for all } x{:}\tau \in \Gamma \}$</td>
</tr>
</tbody>
</table>
In measuring the size of $\tau$, all meta types $U$ are disregarded; the size is thus invariant under meta-substitutions $C/X$. We also note that since reduction $e \rightarrow e'$ is deterministic, $e$ halts if and only if $e'$ halts.
**Lemma 8** (Expansion closure).
1. If $\cdot; \cdot \vdash e : \tau$ and $e \rightarrow e'$ and $e' \in \mathcal{R}_\tau$, then $e \in \mathcal{R}_\tau$.
2. If $\cdot; \cdot \vdash e : \tau$ and $e \rightarrow^* e'$ and $e' \in \mathcal{R}_\tau$, then $e \in \mathcal{R}_\tau$.
**Proof.** The first statement is proven by induction on the size of $\tau$; the second, by induction on $\rightarrow^*$. △
**Lemma 9** (Fundamental Lemma).
**Proof.** By induction on $\Delta; \Gamma \vdash e : \tau$. In the interesting case of recursion $\text{rec-case}$, we make essential use of coverage and structural descent in the recursive calls. △
**Theorem 10** (Termination). If $\cdot; \cdot \vdash e : \tau$ then $e$ halts.
**Proof.** Taking the empty meta-context $\Delta$ and empty computation-level context $\Gamma$, we obtain $e \in \mathcal{R}_\tau$ by the fundamental lemma, which implies that $e$ halts by definition of $\mathcal{R}_\tau$. △
### 8 Related Work
Our work is most closely related to [16] where the authors propose a modal lambda-calculus with iteration to reason about closed HOAS objects. In their work the modal type □ describes closed LF objects. Our work extends this line to allow open LF objects and define functions by pattern matching and well-founded recursion.
Similar to our approach, Schürmann [15] presents a meta-logic $\mathcal{M}^2$ for reasoning about LF specifications and describes the generation of splits and well-formed recursive calls. However, $\mathcal{M}^2$ does not support higher-order computations. Moreover, the foundation lacks first-class contexts; instead, all assumptions live in an ambient context. This makes it less direct to justify reasoning with assumptions, and, perhaps more importantly, complicates establishing meta-theoretic results such as proving normalization.
Establishing well-founded induction principles to support reasoning about higher-order abstract syntax specifications has been challenging and a number of alternative approaches have been proposed. These approaches have led to new reasoning logics—either based on nominal set theory [14] or on nominal proof theory [7]. In contrast, our work shows that reasoning about HOAS representations can be supported using first-order logic by modelling HOAS objects as contextual objects. As a consequence, we can directly take advantage of established proof and mechanization techniques. This also opens up the possibility of supporting contextual reasoning as a domain in other systems.
### 9 Conclusion
We have developed a core language with structural recursion for implementing total functions over LF specifications. We describe a sound coverage algorithm which, in addition to verifying that there exists a branch for all possible contexts and contextual objects, also generates and verifies valid primitive recursive calls. To establish consistency of our core language we prove termination using reducibility semantics.
Our framework can be extended to handle mutually recursive functions: by annotating a given `rec-case` expression with a list of invariants using the subordination relation, we can generate well-founded recursive calls matching each of the invariants. Based on these ideas, we have implemented a totality checker in Beluga. We also added reasoning principles for inductive types [4] that follow well-trodden paths; we must ensure that our inductive type satisfies the positivity restriction and define generation of patterns for them.
Our language not only serves as a core programming language but can be interpreted via the Curry-Howard isomorphism as a proof language for interactively developing proofs about LF specifications. In the future, we plan to design and implement such a proof engine and to generalize our work to allow lexicographic orderings and general well-founded recursion.
Acknowledgements We thank Sherry Shanshan Ruan for her work during her Summer Undergraduate Research Internship in 2013 at the beginning of this project.
A MODEL-DRIVEN APPROACH TO THE INTEGRATION OF PRODUCT MODELS INTO CROSS-DOMAIN ANALYSES
REVISED: February 2015
PUBLISHED: March 2015 at http://www.itcon.org/2015/17
EDITOR: Rezgui Y.
Ulrich Hartmann, Associate Research Scientist (until Oct. 2014) and PhD student, Karlsruhe Institute of Technology KIT, Department Building Lifecycle Management;
Ulrich_Hartmann@gmx.de
Petra von Both, Professor, Karlsruhe Institute of Technology KIT, Department Building Lifecycle Management; Petra.vonBoth@kit.edu
SUMMARY: During the many phases in the lifecycle of a building, an ever growing number of dynamic and static aspects are encountered that seem worthy of modelling in a computer-readable representation. Product models that attempt to cover the concepts from the many disciplines or phases in a product’s lifecycle face a dilemma, namely maintaining a balance between growing model complexity on the one hand, and user requests for additional concept coverage on the other. In examining the main principles of model theory (Stachowiak 1973) more closely, this paper tries to identify tendencies that may hamper the balance between complexity and completeness. However, a scientific evaluation of model complexity needs objective measures. Metrics for the assessment of software complexity are readily available and can also be applied to models, since models (schema or instance) can automatically be transformed into a programming language (Hartmann, von Both 2010) or an instance structure. These metrics can therefore be helpful in the discussion about model complexity. Different approaches have been taken to handling model complexity and the complex process of modelling itself. Leal S. et al. (2014) develop a template-based model generating tool for energy simulation models to evade the lossy and complex recourse to IFC or gbXML models. Cao J. et al. (2014) use a transformation tool to generate building energy performance simulation models in order to reduce the complexity otherwise encountered in traditional building simulation programs. Thomas D. et al. (2014) combine a CitySim model holding a simplified representation of the surrounding buildings with models for the EnergyPlus building performance engine. This allows for rapid assessment of the performance of the early design-stage building information models (BIM) on both the building and urban systems scale. Koene F. et al. (2014) take the reductionist idea of model theory a step further by reducing a building to a simple model consisting of two thermal masses.
The Building Lifecycle Management Department at Karlsruhe Institute of Technology KIT conducts analyses beyond the scope of a single building. Analyses often have to take the urban environment into account, meaning that not only building models but also models of the surrounding city area are potentially involved. From the modelling standpoint this raises the question as to whether the city model and building models should be unified into one “super model”, or whether models should be kept separate and the relations between them expressed by means of meta data.
The work presented is a result of the research project ‘‘A model-driven approach to the integration of product models into cross-domain analysis processes’’ (original title: ‘‘Ein modellgetriebener Ansatz zur Integration von Produktmodellen in domänenübergreifende Analyseprozesse’’) sponsored by the federal German research funding organization ‘Deutsche Forschungsgemeinschaft (DFG)’.
KEYWORDS: model integration, complexity, dominant decomposition, cross-domain model analysis, building lifecycle management, city models, building models, product models, IFC, CityGML.
COPYRIGHT: © 2015 The authors. This is an open access article distributed under the terms of the Creative Commons Attribution 3.0 unported (http://creativecommons.org/licenses/by/3.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. INTRODUCTION
Consulting the principles of model theory and the metrics for model complexity, the approach of keeping models separate has been taken. The conceptual framework presented here includes a concept for the specification of interfaces between models. This specification is not restricted to the description of schematic (type-based) relationships, but also allows the expression of constraints on model objects (instances). The separate specification of the interface between the analysis model on one side and the source model(s) on the other, not only facilitates the re-use and replacement of source models but also enables the distribution of work between experts of different disciplines: Domain experts can concentrate on the design of the analysis model, while data modelling experts can concentrate on the mapping expressions between models.
In the AEC sector, the use of models as a means for communication, analysis or collaboration has been common for centuries, including following the arrival of computers. While models were previously expressed using physical or symbolic replicas, computers have given us virtually limitless power of expression. In the field of AEC, many disciplines work together, with each discipline having its own logical export model. With the emergence of computers it became common to transfer working steps to the computer. This initial step didn’t change the conceptual working method; it simply moved the usual steps to the computer to take advantage of improved data handling, e.g. by using it as a digital drawing board or a central repository for the management of drawings, etc. Handling drawings turned out to be a cumbersome process due to the many different formats used by applications to store them. This led to the idea of having a STandard for the Exchange of Product model data, initiated under the abbreviation STEP in 1984. Early research work focused on the distribution of model data in networks using a common typeset for model entities and entity relations (Hartmann, U.; Huhnt, W.; Laabs, A.; Pahl, P. J., 1990). While the exchange of data between applications was the primary focus of STEP, the aim of its follower, the Industry Foundation Classes (IFC), was the digital representation of buildings, a.k.a. modelling. The Industry Alliance for Interoperability (now BuildingSMART) was the organizational body that set the goals for the IFCs: to create a formal basis for digital building data in all phases of its lifecycle. Starting in 1995, national and international workgroups of all relevant disciplines shaped out a common product model, intended to be a sufficient basis for collaboration and to improve the quality of computer-supported work.
The well-intended approach of bringing all related disciplines under the hood of a single common model faces some obstacles, however. The disciplines often do not share the same perception of an artifact. A wall may be reduced to an axis in structural engineering, a quantity of necessary materials in calculation, a bidding position, etc. In some cases, an entity may be modelled as the unification of all attributes of all disciplines. In other cases, artifacts of the original models of disciplines involved may not be entirely compatible. This may cause losses or redundancies in the unified model.
In this paper, we follow a three-step guideline for the assessment of models. First, we analyze the effects of the so-called “dominant decomposition” (D’Hondt, 2002) to various disciplines, then we look at the main principles of model theory as founded by Stachowiak in 1973 (Stachowiak 1973), and third we examine the reasons for the complexity of models schemas and model instances (discussed in-depth in Hartmann, von Both 2010). As mentioned, multi-model scenarios are of primary interest in this paper. The specifics of these scenarios lead to special requirements for cross-domain modelling.
Taking all the steps into account, we shape out a solution approach for a common modelling problem in urban architectural analysis: the analysis of problems related to buildings within an urban context. Depending on the problem, this often turns out to be a multi-model problem. Although applicable to analyses embracing more than one model, the approach may also ease the mapping between the object model of an application and a (complex) standard model. It may encourage implementers to work on top of a standard model without facing a complexity that is implausible in the light of the tasks the application aims to deal with.
As a generalization of the solution approach, a conceptual framework has been developed. Its intention is to formalize the analysis within a multi-model scenario. Finally, two sample scenarios are given. Also, a software prototype has been developed as a proof of concept.
2. MODEL ASSESSMENT BACKGROUND
2.1 Separation of concerns and the dominant decomposition
The first notion of the idea of separation of concerns (SoC) goes back to Dijkstra in 1976 and Parnas in 1972. It denotes the identification of different concerns in a software system. In the face of the so-called software crisis of the 1970s (Naur and Randell 1969), the phrase was formulated to express the intended encapsulation and modularisation in software systems. Aspect-oriented software development (AOSD), originally presented by Kiczales in 1997, builds on this idea in order to find new ways of identifying cross-cutting concerns. Concerns are regarded as cross-cutting if behaviour of the same kind is spread or repeated over several modules, instead of being encapsulated in a single module.
The unification of concepts from different disciplines in a single model may be seen as a compromise for the sake of interoperability. Those disciplines that see a significant syntactical and/or conceptual gap between their original native model and the integrated model may experience the dominance of the new decomposition as troublesome. This effect has been impressively described in “The Tyranny of the Dominant Model Decomposition” (D’Hondt, 2002). The dominance of the dominant decomposition may cause side effects. The larger the gap between the unified model and the discipline’s native model, the larger the amount of required knowledge about the relationship between them in the application logic that makes use of the unified model. Figure 1 (Filman 2004) illustrates this. The reality, in this case an unsorted set of coloured shapes of different sizes, may be sorted or decomposed using different criteria: size, colour or shape. As soon as one of three criteria becomes the dominant criterion, the other two are scattered across, and are thus cross-cutting concerns.
Expressed in terms of Figure 1, we can see that taking the colour-sorted model as the common unified model “disciplines” focusing on size, as the main modelling criteria have to query the unified model in order to reestablish their native decomposition. They will most likely view the colour-sorted model as being dominant.
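The effect in Figure 1 can be reproduced in a few lines: once the data is physically grouped by one criterion, that criterion is dominant, and every other criterion becomes cross-cutting and must be re-queried across all groups. The data and names below are illustrative:

```python
from collections import defaultdict

shapes = [
    {"shape": "circle", "colour": "red",  "size": "small"},
    {"shape": "square", "colour": "red",  "size": "large"},
    {"shape": "circle", "colour": "blue", "size": "large"},
    {"shape": "square", "colour": "blue", "size": "small"},
]

def decompose(items, criterion):
    """Group items by one criterion -- this becomes the dominant decomposition."""
    groups = defaultdict(list)
    for item in items:
        groups[item[criterion]].append(item)
    return dict(groups)

# Choose colour as the dominant decomposition ...
by_colour = decompose(shapes, "colour")

# ... and 'size' is now scattered: re-establishing the size view means
# scanning every colour group, i.e. size has become a cross-cutting concern.
small_items = [item for group in by_colour.values()
               for item in group if item["size"] == "small"]
```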

Dominant decompositions and product models
Domain models in the AEC disciplines clearly have a higher complexity than the entities shown in Figure 1. Therefore, the unification of domain models into one unified model is an even bigger challenge. The likelihood of the resulting decomposition being regarded as dominant is higher the larger the semantical gap between the resulting unified model and the original domain model.
Figure 2 illustrates this. Bidding positions hold summary information on the entire construction work by defining an object structure advantageous for calculation (Figure 2, right). This decomposition of objects (e.g. aggregates or whole/part-relations) is useful (pragmatic) for solving the underlying problem of concluding contracts on the basis of bidding positions where the controlling of costs is the main focus. The same decomposition is totally useless, for example, for structural mechanics calculations. Summing up all E-Modules and Moments of Inertia to conduct a structural mechanics calculation (Figure 2, left) would simply be wrong. If vice versa the decomposition of the engineering model is also taken for modelling costs, then costs will cross-cut the model, since the intention of the bidding model - to expose aggregated cost values - cannot be modelled. Therefore, from the viewpoint of the bidding domain, the engineering model is a dominant decomposition. Product models happen to be pure data models; they do not implement any behaviour. Therefore the relation between the aggregated position costs and the origin of the cost information (amount of material for each single element) is implicit. In other words, the algorithmic relationship between summary cost and the many source elements with their single cost values is not explicitly expressed by the model. Both sides may be altered independently and unnoticed. This redundancy inside the model may cause contradictions and inconsistent model states. In our case, there is no functional relation between the cost of a single part and the summary costs information. In well-established product models like the IFCs, this phenomenon is commonly observable. It is in the responsibility of applications to handle inherent and implicit model design decisions correctly by means of the application logic.
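The implicit, behaviour-free redundancy described above can be made concrete. In the sketch below (our own schematic classes, not actual IFC entities), the stored summary cost and the per-element costs drift apart silently; deriving the aggregate functionally instead of storing it removes the redundancy:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Element:
    name: str
    cost: float

@dataclass
class BiddingPosition:
    elements: List[Element] = field(default_factory=list)
    summary_cost: float = 0.0   # stored redundantly, as in a pure data model

pos = BiddingPosition([Element("wall", 120.0), Element("slab", 80.0)],
                      summary_cost=200.0)

# Both sides can be altered independently and unnoticed:
pos.elements[0].cost = 150.0
inconsistent = pos.summary_cost != sum(e.cost for e in pos.elements)

# A functional relation derives the aggregate instead of storing it:
def derived_summary_cost(pos: BiddingPosition) -> float:
    return sum(e.cost for e in pos.elements)
```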
2.2 Modelling principles
Models are made for a purpose. This reflects the modelling principle of pragmatism. The purpose may be to conduct analysis for a set of given problems, lossless exchange of product model data between participants in a workflow, description of physical product properties, description of the construction processes, etc. The modelling principles of representation, reduction, abstraction and pragmatism expressed in general by model theory acquire their own shape in specific disciplines. Engineering sciences define their own concepts and models (representations) by reducing the more general concepts of physics to the level of detail that is necessary to describe the respective problem (pragmatism). As a result, in structural mechanics a beam will not be calculated on a molecular level. The engineering model consists of just a few parameters yielding results that can be calculated with reasonable effort. Interfaces between model elements specify the action and reaction between related elements. Computer sciences have developed their own specific adaptations of modelling principles. The Interface Segregation Principle (ISP) states that modules should depend on the leanest possible interfaces: module users should not be forced to call interfaces unrelated to their context. The motivation for this postulation lies in the limitation of dependencies between modules, since conceptually unnecessary dependencies can be the reason for annoying conflicts or unnecessary complexity. For example, a building may either be modelled as a collection of rooms or as a collection of walls and plates. Rooms can be calculated using the distances between walls and slabs, or alternatively, walls and slabs can be calculated using the gaps between rooms. In a building model, the concept of rooms cannot be expressed independently from the concept of walls and slabs; otherwise it would be possible to specify a room whose geometry collides with walls and slabs. When using model types in which this kind of redundancy is possible (e.g. IFC), it is up to the application logic to resolve these conflicts. An example may serve to illustrate this: In the object-oriented paradigm, properties of an object are modelled in a “has-a” relationship. For example, an object has a geometry, or a storey has a collection of rooms. This modelling approach ensures that subordinate structures are discarded when the parent structure is deleted. The list of rooms does not make sense once the storey has been removed. IFC follows another approach called INVERSE reference.
Rooms are assigned to a storey via relation objects (IfcRelContainedInSpatialStructure). Consequently, if the storey is deleted, its rooms and the relation objects remain. Implementing the has-a behaviour requires internal knowledge on the part of applications; merely removing the storey object is insufficient. Applications are thus forced to implement interfaces that are unnecessary for their native purposes, which is not in line with the ISP. While this is an example of a structural-modelling pitfall, redundancies can also be found. For example, IFC models may hold calculated floor areas alongside a room’s length and width values, the calculated thermal transmittance values of walls alongside layered walls, etc. Changing one side does not trigger an update of the other side of the dependency. Where cause and reason are not obvious, model users either do not trust the calculated results (and prefer to recalculate the values on the basis of their original input parameters) or otherwise accept possible inconsistencies. There is another issue with this approach: applications that decide to recalculate room geometry from wall positions can be terribly wrong. If rooms were the leading elements, which one of the original geometrical attributes should be recalculated? This modelling decision cannot be expressed explicitly in most model types (e.g. IFC). As a layer between an application and a standard model like IFC, the approach presented in this paper can encapsulate solutions to the above-mentioned problems. It can free applications from having to fill design gaps between the native object model and an external object model.
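To illustrate why applications carry this burden, the INVERSE-reference style can be mimicked with a third object holding the containment link. This is a schematic sketch in our own naming, not a real IFC API:

```python
# Containment lives in a separate relation object, so deleting a storey
# naively leaves its rooms and the relation entries behind (orphans);
# the has-a (cascade) behaviour must be supplied by the application.

class Model:
    def __init__(self):
        self.objects = set()
        self.relations = []  # (container, contained) pairs, like IfcRel* objects

    def relate(self, container, contained):
        self.relations.append((container, contained))

    def delete(self, obj):
        """Naive delete: removes only the object itself."""
        self.objects.discard(obj)

    def delete_cascading(self, obj):
        """Application-supplied has-a behaviour: follow relation objects."""
        for pair in [p for p in self.relations if p[0] == obj]:
            self.relations.remove(pair)
            self.delete_cascading(pair[1])
        self.objects.discard(obj)

m = Model()
m.objects |= {"storey", "room1", "room2"}
m.relate("storey", "room1")
m.relate("storey", "room2")

m.delete("storey")                       # rooms and relation objects remain
orphaned = [contained for _, contained in m.relations]
```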
2.3 Model complexity
BuildingSMART, the makers of the IFCs, want to support the participation of different disciplines in building lifetime processes by defining subschemas of the entire IFC model schema. The so-called Model View Definitions (MVDs) provide a subset of model elements appropriate for collaboration between specific disciplines. This encapsulation of concepts also helps reduce complexity. So far, MVDs have been published for the overall coordination of model integration, for structural analysis, and for the exchange of basic facility management data. Being subschemas, they do not introduce new IFC schema elements, but are assembled using the available IFC concepts. New decompositions, e.g. aggregated values, are therefore only possible within the complete framework of the IFCs. When working with the IFCs, applications are still faced with the high formal complexity of IFC constructs. The formal complexity of the IFCs could be one of the main reasons why there are not more than just a few small downstream applications on the market using IFC. The BuildingSMART website lists around 160 applications supporting IFC import, with roughly half of them also supporting IFC export. Only the big players have the resources to implement full IFC support. The participation of around 30 companies in the IFC certification process shows this. The deep chains of indirections in the IFC schema are relics of the relational database background in the long history of this data model. Unfortunately, the relational constructs have made their way into the IFC XML schema. This becomes obvious in constructs such as IFCRelation (and derived classes). They resemble foreign key constructs of the relational modelling paradigm and do not align with object-oriented design patterns. As an example, in the object-oriented world, all the attributes of an object are deleted at the end of the lifetime of the containing object (part-of relationship). Not so with IFC objects. 
Since the attributes of an object are stored in a second, separate object, and the object itself is unaware of this second object, the lifetime of the main object is not in sync with the object holding the attributes. In a relational database this would not pose a problem, since referential integrity would be maintained by the database management system. This phenomenon, known as the “object-relational impedance mismatch”, makes it even more difficult for object-oriented applications to work with the IFCs. Loose coupling in the object-oriented world decouples the lifetimes of the two related objects by intention. Since IFC lacks the capability to model part-of relations correctly, it is up to applications to decide whether to apply strong or loose coupling of sub-elements. Another issue is the existence of two parallel ID systems. IDs have to be unique inside their area of validity (scope). In the IFCs, the two ID systems have two different scopes: the GlobalID system has global (worldwide) scope, whereas the simple ID system has file-wide scope. The reference mechanism used in IFCRelation classes uses the ID system with file-wide scope. Therefore, aggregates can only be defined if all referenced elements are contained in the same file. This physical linkage to the storage representation is especially painful, because it inhibits the modelling of components that could be re-used and assembled in different contexts - an established and well-proven modelling scenario in mechanical engineering disciplines. Discussions about complexity can easily become vague and based on personal assumptions. Metrics have been developed to express the complexity of software in general and of models in particular, and thereby to prevent such vagueness. Metrics for the analysis of product model complexity are discussed in “Metrics for the Analysis of Product Model Complexity” (Hartmann, Ulrich; von Both, Petra 2010), where a comparison is made between the complexity of the IFCs and CityGML.
The metrics presented there can also be used to estimate the influence on complexity of the approach presented in this paper.
2.4 Multi-model scenarios
Real-world data is of primary interest for conducting realistic analyses. It is stored in databases or files using a data model that has been previously defined (e.g. XML schema, database schema, etc.). Before conducting a model-based analysis on real-world data, it must be ensured that the data model of the data source contains sufficient data for the analysis. This is not a matter of quantity but of the element types and properties encountered in the source data model. Sometimes it becomes apparent that there is no single source data model that holds all the information (types, attributes, etc.) necessary to conduct the envisioned analysis. Many problems in the context of architecture go beyond the scope of one single building. This is the case if the urban environment also needs to be taken into account. In these cases it is tempting to demand a unified city+building data model for such domain-spanning problem sets. In fact, efforts have been made to import IFC building information into city models (e.g. the LOD concept in CityGML). Enriching existing models by adding concepts or attributes of other model schemas may be reasonable in some cases, but in view of the considerations regarding model theory and model design referred to previously, format conversions and data imports cannot serve as a general approach for multi-model management. Taken to the extreme, the huge bandwidth of imaginable problems would in the end lead to complete unification of countless different model types. The resulting supermodel would either be ambiguous and inherently inconsistent because of the many intertwined and unrelated decompositions, or favour one decomposition. For huge models like the IFCs, much work has been done to avoid ambiguities between participating subdomains. This harmonization process cannot always be 100% complete, but it always ends up with a decomposition being partly viewed as dominant by specific participants.
Multi-model analysis scenarios can include data models that are either completely disjoint or have a conceptual overlap. In the first case, the models do not share similar concepts. For example, the element types of a model of future energy cost trends are entirely unrelated to those of a building model. Combining them is nevertheless crucial, since energy-saving investments can only be assessed realistically if both the costs of energy conservation (e.g. better thermal insulation) and the lifetime energy costs are included in the analysis. The dependency between energy costs and the costs of energy conservation can then be analysed by accessing both independent models plus a linking model in between. In this case, a model of the thermal energy flow is the link between the physical characteristics (structure, material) of the building and the absolute amount of thermal energy, which can be transformed into costs given a cost-per-energy-unit factor.
If a conceptual overlap between the models exists, it may be the enabling factor for the joint participation of both models in an analysis. Imagine a citywide analysis of the potential solar energy that might be captured within the city area. In the city model, the concept of a building exists, but it is not precise enough to extract relevant information about building roofs such as size and orientation (where solar equipment could be placed) or even the suitability of the underlying construction. A building model, on the other hand, does not have enough urban context information (location, orientation, cadastral assignment).
Another issue of practical importance for data handling is the following: assuming that logically-identical buildings will have identical IDs in both data models is unrealistic. Other characteristics have to be consulted to identify associated objects in both models. In this example, street name and number may occur in both models, thus enabling an unambiguous pairing of the two occurrences.
So far, we have seen some of the problems that can arise if data models form the basis for cross-domain analysis. The main problems are:
- ID ambiguities between the different data models
- Different (“dominant”) decompositions
- Different conceptual design decisions (loose coupling versus fixed compositions, indirections, aggregated values vs. detailed attribution)
- Paradigm breaks (impedance mismatch, e.g. object-oriented vs. relational)
- Different syntax
2.5 Requirements for cross-domain models
Analysis models constructed from source models of different domains should not suffer from the formal and conceptual complexity mentioned above. Pushing the characteristics and complexity of source models forward into the analysis model would obviously hamper the implementation of analysis algorithms. A level of abstraction between analysis models and source models is necessary. In this sense, the analysis model is a model of a model, that is, an additional meta level on top of the existing abstraction level. Therefore, the well-known principles of modelling (abstraction, reduction, representation, and pragmatism) also hold true for the creation of analysis models from source models. Analysis models in particular have to be more abstract than the base models they rely upon, because analysis models address specific problem domains, in contrast to common standard models such as IFC or CityGML, which address a whole spectrum of participating disciplines. This goal can only be reached by the reduction of the types exposed by the source models to the absolute minimum required for the analysis. Currently, some common standard models are envisioned as an all-purpose model for a whole range of disciplines. Although conceptually appealing, the concept of a universal language has not yet reached a significant level of relevance. Humans prefer to represent their thoughts in many different languages specific to their cultures. All languages have their own advantages and expressiveness that make them a unique part of their culture. Seen in this light, an analysis model uses concepts unique to the respective problem domain. The language concepts are specific to this particular domain, expressing the expertise of the discipline in a characteristic and well-chosen terminology. Compared to the overall common language, the set of concepts in a terminology is reduced by number, but extended by semantics.
Words from natural languages may be reused, acquiring a special meaning in an expert terminology, but the expert language itself is reduced to the absolute minimum necessary to express the concepts of the expert domain. Reduction to this relatively small set of language elements means an expert language can be more precise, less error-prone, and less verbose. This is done intentionally for pragmatic reasons: to supply a compact vocabulary for expressing problems typical of the domain and for communication between domain experts.
A cross-domain multi-model analysis makes the necessity of an additional abstraction layer on top of the base models especially obvious. However, the implications of model theory and the different perspectives between expert domains and global models still hold true even in the case of only one base model being involved.
Consequently, the abstraction layer between base models and the expert domain model (in this case the analysis model) should support:
- Analysis-specific ontologies
- Formal notations of the problem domain (e.g. naming conventions specific to the expert domain)
- Integration on different scales
- A reduced type set
- Explicit mapping between source models and analysis model
Analysis algorithms could then concentrate on the complexity of the analysis itself by using the leanest possible interface to the model.
3. META-LEVEL MODELLING APPROACH
How can the requirements for cross-domain models be met? As mentioned earlier, the software crisis gave a special impetus to computer science to overcome complexity problems. The new discipline of software engineering was created as an overarching framework: not only does it include the adoption of new paradigms such as object orientation, it also puts the focus on the software production process, for example by defining new roles and stakeholders. A similar process can be observed in the field of product lifecycle data management. In the building industry, product models for different purposes have already emerged. Given their complexity, problems similar to those known from the software crisis arise. As a solution approach, the principles of general model theory have been adopted and applied, and sample solutions and best practices specified. One of these best practices is the famous set of design patterns for software, originally published by Gamma et al. in 1994, which remains a standard work today. Although not intended as a universal remedy, design patterns serve as a library of strategies for commonly-encountered structural and behavioural problems in software design. Product models can take advantage of the expertise and solutions found by computer science, because models are just another piece of software. In our approach, we use the façade pattern to hide complexity from applications and to facilitate a generative implementation approach.
3.1 Applying the model façade pattern
The façade pattern provides a simple interface for packages of higher complexity. As such, it applies the black-box-principle. The reduced complexity exposed through the interface is designed to serve a pragmatic reason: to provide the leanest-possible dependency between the consumer and the supplier of functionality. The interface is a contract that consumers can rely on and implementers have to comply with. Any implementer who fulfils the interface contract is a valid supplier of that functionality. This abstraction creates the desired independence and paves the way for reuse and interchangeability. These general interface considerations have been further specialized in the façade pattern. It provides an abstraction of the underlying packages. In fact, consumers do not need to know about the details of the underlying packages at all.
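The interface contract described above can be sketched in a few lines of code. The following is a minimal, hypothetical illustration (the paper's prototype is implemented in C#/.Net; Python is used here only for brevity, and all names are invented for this example): a consumer depends solely on a lean interface, while the façade combines the underlying source models behind it.

```python
class AnalysisModelFacade:
    """Lean interface the analysis algorithm depends on."""
    def buildings(self):
        raise NotImplementedError


class MappingFacade(AnalysisModelFacade):
    """Hides the underlying source models behind the lean interface."""
    def __init__(self, model_a, model_b):
        self._a = model_a   # e.g. building IDs from a city model
        self._b = model_b   # e.g. roof data from a building model

    def buildings(self):
        # Combine information from both source models; consumers never
        # see the source models or their internal complexity.
        return [{"id": bid, "roof": self._b[bid]} for bid in self._a]


facade = MappingFacade(model_a=["b1"], model_b={"b1": "flat"})
result = facade.buildings()  # the consumer sees only the lean interface
```

Any implementation that fulfils `AnalysisModelFacade` is a valid supplier, which is exactly what makes model B exchangeable for model B'.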
Transferred to the model scenario, the façade pattern reads as follows (Figure 3): The analysis model provides an interface with an appropriate degree of complexity that hides the underlying models and their complexity, which may be higher than desirable in the given problem context. The mapping façade implements this interface (Figure 3, top). As long as models can comply with the interface contract, they can take part in the implementation of the interface. In Figure 3, model A and model B together yield the necessary information for the implementation of the façade. As an alternative, model B could be replaced by model B’, which also exposes the right interface to fulfil contract a. This abstract design enables the exchange and reuse of models as long as

*ITcon Vol. 20 (2015), Hartmann & von Both, pg. 260*
they fulfil the required contract. This also implies that more than one model may be capable of supplying the complete interface requirements. Interface functionality can be provided by a whole set of models if necessary.
Now that abstractions and dependencies have been defined, the implementation process for the interface has to be worked out. After the target model (here: the analysis model) and the source models have been specified by domain experts, the question is, who implements the mapping between them? Each element in the target model can be assigned to at least one element in the source model. This assignment can be expressed by relating element types, but a pure type-based mapping definition alone may be insufficient. Depending on the problem, it also has to take characteristics of the model objects (instances) into account. As an example, the cardinality of both endpoints may be as general as an m:n-relationship, where m and n can be anything from 1 to infinity. This depends on the problem (e.g. model analysis) and the actual model element instances. Therefore, the implementation of the mapping may include an aggregation of elements from source models to the target model (1:many) or a decomposition of source model elements over target model elements (many:1) (see Figure 4). Only the implementer of the façade needs to know about this problem-dependent mapping logic. The aggregation of cost elements into a summary cost value is an example of such a mapping. Not only do cardinalities have an impact on the mapping logic, the mapping may be influenced by characteristics that are not direct parts of the mapping partners. This may give the mapping the appearance of a constraint. As an example, in summing up single values, there might be a specific threshold value below which values are ignored. We will see in a later example (section 5) that rooftop segments too small for a solar panel are not counted when calculating the potential rooftop area available for power generation. In Figure 4 the mapping ignores all the properties of the elements in model D except the cost values, which are summed up in a mapping algorithm.
The cost information is scattered over the model, therefore the mapping algorithm has to collect this information. To do this, the algorithm needs to know how to calculate volumes and how to apply unit costs.
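A many-to-one aggregation mapping of this kind can be sketched as follows. This is a hypothetical illustration (element types, attributes, and unit costs are invented): the mapping algorithm walks over scattered source elements, derives a cost per element from its volume and a unit cost, and collapses everything into a single summary value on the target side.

```python
# Scattered cost-relevant elements of a (hypothetical) source model.
source_elements = [
    {"type": "Wall",   "volume": 12.0, "unit_cost": 80.0},
    {"type": "Slab",   "volume": 30.0, "unit_cost": 50.0},
    {"type": "Window", "volume": 0.4,  "unit_cost": 400.0},
]

def summary_cost(elements):
    # The mapping algorithm knows how to derive a cost per element
    # (volume times unit cost) and aggregates over the whole model.
    return sum(e["volume"] * e["unit_cost"] for e in elements)

total = summary_cost(source_elements)  # 960 + 1500 + 160 = 2620.0
```

Only the façade implementer needs to know this logic; the analysis model simply exposes one aggregated cost attribute.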

Target model ontology
Experts use the vocabulary and structures commonly accepted in their domains of expertise. The framework concept presented here allows the definition of these elements so that the ontology of the target model reflects the perception of domain experts. The decomposition of a model used in an analysis can then no longer be perceived as dominant (or even tyrannical), because experts can design exactly the decomposition they find useful for a given type of analysis. Formal complexity resulting from the necessity to transform a given structure of a standard model into the usual format of a domain can be kept out of the formulation of the analysis itself, especially out of the logic of the analysis algorithm. The formal complexity of specifying the mapping between elements of two model types has been completely encapsulated inside the mapping component that is managed separately. Through this, the mapping complexity is kept entirely out of the scope of the domain expert, who can concentrate on the analysis problem itself.
Schemaless specification
When talking about data models, we do this mostly on the assumption that the schemas of all participating data models are clearly defined. However, the generation of an analysis model cannot be performed merely as a schema transformation between source model schemas and the schema of the analysis model. As mentioned before, the characteristics not only of types but also of instances can be important. The façade implementation must be able to put this requirement into practice. The question is whether the mapping logic needs to know any detail about each participating schema. More precisely, does it need to know about each complete schema or only about the part that is going to be mapped? Obviously, the schema of the target model must be known to the mapping logic, otherwise it would be impossible to produce the required objects in a valid format. Or to put it another way, the only elements of the target model that are instantiated are those for which a mapping specification exists. In contrast to this, only those elements of the source models occurring in a mapping specification are of interest to the mapping logic. Since the reduction of complexity is the main reason for creating target models from source models, it is very unlikely that the whole range of source model element types will be processed by the mapping logic. The instantiation of the relevant analysis model objects is the main functionality of a façade implementation; the resulting model must of course be a valid representation of the analysis model schema. In an implementation, this can be checked in a post-processing step. It frees the mapping process from the huge type convention overhead. Type-based mapping relations can be specified declaratively without complete coverage of all model element types encountered. Moreover, the type-based mapping specification can be further extended with object-based mapping constraints.
Elements of a mapping specification
Element sets are commonly specified declaratively with query languages such as SQL or OCL. They consist at the very least of the specification of the source elements on which the query operates and the specification of target elements onto which source elements will be projected (mapped). The projection does not need to be one-on-one; it can also include functional or structural relationships. Functional relationships can be simple conversions such as data type or unit conversions, but also algorithmic constructs of any kind, e.g. a sum of cost values. Structural relationships can be aggregations of sub-elements into elements on a higher hierarchical level, e.g. an element for summary cost information. Beyond the simple pass-through of elements, the result set can be further constrained by specifying conditions on source elements. While a correct specification of source elements and projections requires knowledge of the source and the target types, constraints can include characteristics of element types involved (schema knowledge) as well as of element instances (e.g. attribute values) as shown in Figure 5. The mapping specification may reference one, two, or more source models. In SQL or OCL, queries on distinct sets of elements can occasionally be joined based on their concordant IDs. As mentioned above, it cannot be assumed that participating object-oriented data models implement object identity by means of IDs. Therefore, an alternate mechanism is required to express object identity and to support target object construction through the merger of two source model objects.
A complete mapping specification consists of a source model element specification, a target model projection, and optional mapping constraints and – if necessary – a generic ID definition capable of expressing cross-model links (joins) between associated model concepts.
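The four parts of a complete mapping specification can be made concrete with a small data structure. The following is a minimal sketch under the assumption that source specification, projection, constraint, and identity definition are held together in one record (all names and the `ifc:` type are illustrative, not the paper's actual notation):

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class MappingSpec:
    source_type: str                      # fully-qualified source element type
    target_type: str                      # target (analysis) model element type
    projection: Callable[[dict], dict]    # maps a source object to a target object
    constraint: Optional[Callable[[dict], bool]] = None  # optional filter
    identity: Optional[str] = None        # global name of an identity definition

spec = MappingSpec(
    source_type="ifc:IfcWallStandardCase",
    target_type="Wall",
    projection=lambda src: {"thickness": src["thickness"]},
    constraint=lambda src: src["thickness"] > 0.1,
)

walls = [{"thickness": 0.05}, {"thickness": 0.24}]
result = [spec.projection(w) for w in walls if spec.constraint(w)]
```

Here the constraint filters out the thin wall, so only one target object is instantiated; the `identity` field would only be set when a cross-model join is needed.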

Referencing source model elements
Data model types are valid in their distinct namespaces. To avoid ambiguities between participating source models, it is necessary to specify the namespace of a source-model element in a source-model element specification by specifying its fully-qualified name. Additionally, a method for matching the identities of objects across source models must be specified. This specification allows for the subsequent creation of objects in which the characteristics of both source objects are combined. This is equivalent to inner join declarations in SQL and OCL. The source model element specification has knowledge about the source data model but is completely unaware of the target data model.
Target element projections
Once source model elements have been specified, the target model ontology can be instantiated according to a projection specification that specifies the name and data type of the target model object. As a simple example, an attribute value of a source-model object could be copied onto an attribute value of the target model. Slightly more complex, the sum of many attribute values of a source model could be stored in the respective aggregated attribute value of the target model. There could also be a complex calculation taking different source-model attributes as input parameters and writing the result of the calculation into the target model, e.g. the calculation of a thermal transmission value from layers of a wall element (Figure 5, mapping function). The projection is the only place in the whole mapping specification where elements of both the source and target model are known.
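A projection of the more complex, calculating kind mentioned above can be sketched as follows. This is a simplified, hypothetical example of deriving a thermal transmittance from wall layers using U = 1 / Σ(dᵢ/λᵢ); surface resistances are deliberately ignored, and the attribute names are invented:

```python
def project_u_value(wall_layers):
    # Simplified thermal transmittance: reciprocal of the summed
    # thermal resistances d/lambda of all layers (surface resistances
    # omitted for brevity).
    r_total = sum(layer["d"] / layer["lam"] for layer in wall_layers)
    return 1.0 / r_total

# Hypothetical source-model wall layers: thickness d [m], conductivity lam [W/mK].
layers = [
    {"d": 0.24, "lam": 0.80},   # brick
    {"d": 0.10, "lam": 0.04},   # insulation
]

# The projection writes the calculated result into the target model object.
target_wall = {"u_value": project_u_value(layers)}
```

As in the text, the projection is the only place where both the source attributes (layer data) and the target attribute (`u_value`) are known.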
Mapping constraints
Restrictions on the resulting element set can be specified by expressions. Figure 5 shows constraints on source elements and/or on the resulting target elements. Only matching elements on the source-model side will then be respected and instantiated on the target-model side. Theoretically, conditions can be specified on source element attributes or on target model attributes. In the latter case, a target model object will be constructed and potentially discarded at a later stage if constraints are violated. In the former case, source model objects that do not meet conditions will not be part of the set of elements projected into target model elements. The difference can be substantial, since accumulated values may show different results depending on the order in which restrictions are applied. An appropriate order of constraint application cannot be given without knowledge of the problem. In one case the intention may be to filter out values below a certain threshold value, while in other cases the accumulated value itself may be subject to restrictions.
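The order-dependence described above is easy to demonstrate with a toy example (values and threshold are invented): filtering source values below a threshold before aggregation yields a different result from constraining the aggregated value itself.

```python
values = [2.0, 8.0, 1.5, 10.0]
threshold = 3.0

# Case 1: constrain source elements first, then aggregate.
filtered_then_summed = sum(v for v in values if v >= threshold)   # 8 + 10 = 18.0

# Case 2: aggregate first, then constrain the accumulated target value.
total = sum(values)                                               # 21.5
summed_then_checked = total if total >= threshold else 0.0        # 21.5
```

The two orders give 18.0 and 21.5 respectively, which is why an appropriate order of constraint application cannot be chosen without knowledge of the problem.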
Cross-model identity
Some data models support the concept of global unique identifiers (GUIDs) for all objects. This makes ID-based object mapping an easy task, but the concept is not generally supported. Moreover, in the object-oriented world there is no need for IDs conceptually, and the uniqueness of objects can be achieved another way. It can be achieved through special values for an object’s attributes, thus linking uniqueness closely with the object semantics and not an artificial id value. Since specific attribute values make an object unique, these attributes must be specified in an identity definition. Besides the fully qualified name of the object type, the definition specifies a global name for the identity definition. As an example, the global id of a building in IFC (ifc:IfcBuilding[@GlobalId]) could be linked to the id of a CityGML Building (city:Building@gml:id). Based on these global names, identities can be matched across two or more models. As long as identity attributes in the identity definition of one participating data model can be transformed into the attributes of another model’s identity definition, the matching process can work automatically. This would be the case in simple scenarios where both parts, for example, contain attributes for street name and number in different attributes and different data types. Transformation is then possible in at least one direction.
A different case arises if automatic transformation is impossible. This would be the case, for example, if one data model supports the GUID concept and the other relies on object semantics to support uniqueness. In this case, it might be necessary to specify the mapping relation explicitly in a name value pair construct (e.g. a mapping table). Although the manual creation of a mapping table is a tedious and error-prone task, it might be inevitable in some cases. The functional relation between the IDs of both data model objects then includes a mapping table lookup, as might be the case when mapping IFC building objects (GUIDs) onto CityGML objects (street name / number or cadastral ID). The mapping table would be required for each target / source model pair - clearly an additional effort compared to a merely type-based declarative mapping.
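The attribute-based identity matching from the automatic case can be sketched as a small join. This is an illustrative example only (object shapes and attribute names are invented; real IFC and CityGML objects are far richer): an identity definition names the attributes that make an object unique, and matching normalizes and compares them across models.

```python
# Hypothetical objects from two source models that share no common IDs.
ifc_buildings  = [{"guid": "2xA...", "street": "Main St", "no": "12"}]
city_buildings = [{"gml_id": "B_7",  "street": "main st", "no": "12"}]

def identity_key(obj):
    # Normalized identity attributes named in the identity definition:
    # street name (case-insensitive) and house number.
    return (obj["street"].lower(), obj["no"])

# Index one model by identity key, then join the other against it.
index = {identity_key(b): b for b in city_buildings}
pairs = [(b, index[identity_key(b)])
         for b in ifc_buildings if identity_key(b) in index]
```

When such a transformation between identity attributes is impossible, the lookup in `index` would be replaced by a manually-maintained mapping table, as described above.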
3.2 Formal mapping notation
The schema of a formal mapping notation has to reflect the requirements mentioned in section 2.5. It has to be able to specify the mapping relation between ontologies of the source models and the ontology of the analysis model. It has to be able to express simple data types as well as complex types. Instances of types (a.k.a. objects) can be organized in different topologies such as deeply nested part-of relations, trees, lists and more. The schema has to support these different kinds of aggregational relations. Figure 6 shows the mapping schema in the form of a class diagram.
As shown in Figure 6, complex types (class ComponentType) can be specified that consist of simple types (class ComponentTypeLeaf) or hold a subordinate level of one or more complex types. With these simple type definitions, objects of arbitrary complexity can be generated. The Name attribute of each schema element specifies the type name the generated object will have in the expert domain. The Structure class serves as a container for a whole set of objects. This, so far, meets the requirement of being able to create arbitrary ontologies on the target side. What about the source of data that makes up generated analysis model objects? The BaseElement class holds the specification of the pool of objects that will be queried. It resembles the FROM clause in the query language OCL used in the framework prototype (section 4.1). The SourceType class resembles the SELECT clause in a query language. It specifies the projection of source data onto target data. As an example, if BaseElement specifies all the objects in a model (e.g. by specifying its enclosing container), then setting the SourceType attribute to “IfcWallStandardCase” and the Name attribute to “Wall” would result in a set of objects all of which were of the Wall type. One Wall object is instantiated for each object of type IfcWallStandardCase encountered. The Wall object does not as yet have any attributes or aggregated sub-objects. This does not satisfy requirements in most cases, and the result set usually needs to be populated and refined further. This can be achieved by constraining the source objects, thus limiting the instantiated target model objects to the amount of source model objects with special characteristics, such as attributes with predetermined values (wall thickness, material and the like). Assigning a child element of the type ComponentTypeLeaf will cause an attribute to be generated in the target model object. 
The generation of the attribute follows the same logic as the generation of the ComponentType object itself: the BaseElement attribute of the mapping specification specifies the pool of objects queried, and SourceType specifies the projection. Constraints, the equivalent of a WHERE clause, can be specified in the Attribute field. It specifies a query (SourceType, BaseElement) and an expression that the query result has to meet for the object to be created.
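A miniature of this schema logic can be sketched in code. The following is a hypothetical simplification of the classes in Figure 6 (the real notation is XML-based and richer; class and attribute names here are reduced for illustration): one target object is instantiated per matching source object, and each ComponentTypeLeaf child becomes a generated attribute.

```python
class ComponentTypeLeaf:
    """Simple type: generates one attribute on the target object."""
    def __init__(self, name, source_attr):
        self.name, self.source_attr = name, source_attr


class ComponentType:
    """Complex type: generates one target object per matching source object."""
    def __init__(self, name, source_type, leaves=()):
        self.name, self.source_type, self.leaves = name, source_type, list(leaves)

    def instantiate(self, base_elements):
        # BaseElement = pool of queried objects, SourceType = projection.
        return [{"type": self.name,
                 **{leaf.name: src[leaf.source_attr] for leaf in self.leaves}}
                for src in base_elements if src["type"] == self.source_type]


wall_type = ComponentType("Wall", "IfcWallStandardCase",
                          [ComponentTypeLeaf("Thickness", "width")])
pool = [{"type": "IfcWallStandardCase", "width": 0.24},
        {"type": "IfcDoor", "width": 0.9}]
generated = wall_type.instantiate(pool)
```

Only the `IfcWallStandardCase` object produces a `Wall` instance; the door is ignored, mirroring how the mapping logic needs to know nothing about source types outside the specification.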
3.3 Model-driven façade generation
The concept of a façade between source models and an analysis model can be specified using the formal mapping notation described in section 3.2. In order to put this concept to work, a software prototype has been implemented. In this prototype, the façade is a component that exposes the specified analysis model instance through its programmable interface. In order to generate this analysis model instance, one or more source model instances are specified. Source model types and generated analysis models may vary with every given specification; the mapping logic inside the façade has to reflect this. Therefore, the façade logic must be capable of adapting to a new specification either through interpretation of the mapping notation, or by creating the implementation of the mapping logic anew. A broad spectrum of participating model types is possible. Not only can an analysis model expose a wide range of different ontologies, the source models can also be of any type (and not just IFC and CityGML). An interpreter (Figure 7 a) capable of handling all these possible different types and mapping constructs would be overly complex. Therefore, the model-driven software development approach has been taken. It generates a new façade implementation from every specification (Figure 7 b). The code generator is much simpler than a one-size-fits-all interpreter. In addition, this approach is more abstract, because the façade implements an additional level of abstraction between the Platform-Independent Model (PIM) and the analysis model. In contrast to an interpreter, which would implement the whole analysis model generation process, the model-driven approach separates the mapping logic from the model generation process, especially because the generator has no knowledge of specific model schemas, as shown in Figure 7.
Platform-Independent Model (PIM)
In the model-driven approach, a platform-specific model is generated from a platform-independent model (PIM). In our case, the mapping specification is the PIM that describes the relationship between the analysis model and the source models on an abstract conceptual level in a software-platform-independent way. There are no implications for implementation or software environments. Furthermore, it does not imply any design decision about the software components to be generated. The decision about the structure of the generated façade is made solely in the generator that transfers the PIM into a platform-specific software format that can later be executed in a specific software environment.
PIM-to-PSM Transformation
A generator takes a PIM as input and translates it into a platform-specific format, the platform-specific model (PSM), e.g. a programming language. This so-called PIM-to-PSM transformation transforms a model from a higher level of abstraction into a concrete environment, e.g. a software runtime environment. In our case, the model mapping specification is transformed into C# code for the .Net runtime environment. The generated software follows the façade design pattern. Generators could be exchanged to address different target platforms or different software designs. This approach is aligned with the concept of model-driven software development (MDSD).
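The PIM-to-PSM step can be illustrated with a toy generator. This is a deliberately tiny sketch (the real prototype emits C# for .Net; the template, spec keys, and function names here are invented): a declarative mapping specification is turned into source code, which is then compiled and executed.

```python
# Template for the generated mapping code (the PSM). Double braces
# escape literal braces in the emitted Python code.
TEMPLATE = """def map_{target}(source_objects):
    return [{{"{attr}": o["{src_attr}"]}} for o in source_objects]
"""

def generate_facade(spec):
    # PIM-to-PSM transformation: the generator knows the template,
    # but nothing about any specific model schema.
    return TEMPLATE.format(**spec)

pim = {"target": "wall", "attr": "Thickness", "src_attr": "width"}
code = generate_facade(pim)

namespace = {}
exec(code, namespace)          # "compile" the generated PSM
mapped = namespace["map_wall"]([{"width": 0.24}])
```

Swapping `TEMPLATE` (or the whole generator) for one that emits a different language is exactly the generator exchange the text describes.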
Figure 7: Mapping logic implementation alternatives a) interpretation and b) generated mapping code
Platform-Specific Model (PSM)
The end product of the model-driven generation of software is the platform-specific model, in this case a software component that implements the façade design pattern. The component is ready to be integrated into the modelling framework. It can access instances of specific source model types as input, and create instances of one specific type of analysis model as output. For each mapping specification a façade component will be generated and integrated into the framework.
4. CONCEPTUAL FRAMEWORK AND SOFTWARE PROTOTYPE
In the approach presented here, the cross-domain model analysis workflow consists of the following steps (Figure 8):
- **PIM Specification**
1. Analysis model schema definition
2. Selection of appropriate source model types
3. Specification of model mapping
- Generation of a façade component in the PIM-to-PSM Generator using the PIM Specification
- Selection of source model instances
- Creation of an analysis model instance in the model generator using the generated façade
- Specification of an analysis algorithm module
- Application of analysis logic to the analysis model in the analysis manager
- Presentation of the analysis results (e.g. visualizations)
- Storage of analysis results
The PIM Specification steps are covered conceptually and can be specified using the model mapping notation. The subsequent logical steps are performed by further processing the PIM Specification. The software prototype, as a physical realization of the framework concept, consecutively executes all these steps, from façade generation to the application of the analysis algorithm. Figure 8 shows the sequence of steps performed by the framework.
Separation into these steps enables the division of tasks between the domain expert and the modelling expert. Both experts must collaborate closely to align the mapping of concepts between the analysis model and the base models. The modelling expert needs a basic understanding of the analysis model semantics in order to specify an appropriate mapping, but because of the separation, neither expert is confronted with the full complexity of all the participating models. All steps are clearly separated and communicate through well-defined interfaces;
Figure 8: Workflow in the Model Analysis framework
components that implement one step can be exchanged and reused independently. Therefore, the same analysis model definition can be reused while the source models are exchanged, or the analysis model can be enhanced while the underlying source models are maintained. Further along in the process, the analysis logic can be re-applied or improved while the analysis model remains the same.
4.1 Physical framework prototype
The modelling framework runs as a service, without any user interface. It exposes interfaces for analysis model specification, mapping specification, analysis algorithm specification, analysis result storage and visualization. A graphical editor has been developed to ease the specification of the analysis model and model mappings. For the administration of analysis projects, especially the management of participating models, code generation and integration of analysis algorithms, a framework management console has been developed on top of the framework (Figure 8).
The schema of the analysis model is physically expressed in the XML-based model mapping notation. It can be specified in any text editor or XML editing tool and then passed over to the modelling framework. The framework will then validate the model mapping document against the model mapping schema, and formally invalid specifications will be rejected. The manual development of complex schemas would be a tedious and error-prone task. For this reason, the graphical analysis model editor supports domain experts in designing an analysis model schema. Figure 9 shows a screenshot of the graphical editor. The editor generates the XML equivalent of the graphically specified analysis model in the model mapping notation. This XML document holds the notation in a validated format and is ready to be processed by the framework. Analysis projects can be defined in the Framework Management Console (Figure 10). A project consists of a PIM (Tab “Analysis”), as many references to model instances (Tab “Source Models”) as were referenced in the PIM, and an analysis module that implements the analysis algorithm. The framework can then process the project by running through the steps shown in Figure 8.
A domain expert can specify an analysis model agnostic of any concrete model instance. References to model instances in the analysis model definition can be purely symbolic. These references are resolved when the PIM is processed by the framework. The creation of analysis model elements depends on the availability of appropriate
Figure 9: Graphical editor for analysis model definitions
Figure 10: Framework Management Console
*ITcon Vol. 20 (2015), Hartmann & von Both, pg. 267*
source model data. Data may be available in different sources of different types. The mapping specification between the analysis model and source models has to take the schema of the source models into account. Therefore, a decision has to be made on the model types (IFC, CityGML, etc.) serving as data sources before a modelling expert can specify the mapping expressions. The modelling expert uses the XML-based model mapping notation (Figure 6). This again can be a challenging task; therefore, the graphical analysis model editor also supports the specification of model mappings. The editor ensures that only valid source model element types can be inserted into the mapping expression. The resulting XML document is then ready to be processed by the framework.
The steps are summarized in Figure 11. In the definition phase, the domain expert and modelling expert work together to specify meta information for the relation between the analysis model and related source models. The mapping specification contains all the information necessary to create the software component that implements the mapping logic. In the processing phase, a model façade using the meta information is created by the framework. The code generation component of the framework takes the model mapping meta information as input and creates platform-specific code (C#.Net), which will be dynamically compiled within the framework into an analysis model component. In the management console, source models are specified and the generated façade can generate an analysis model. The framework manages all the different types of model mapping specifications and the resulting analysis models.

5. EXAMPLE ANALYSES
5.1 Example 1: Public transportation and city quarter information
In this example we want to show cross-model analysis in the urban context. A city administration responsible for public transportation is planning a new bus route to enhance the system. The route should link business areas and residential areas and provide an alternative option to travelling by car, with the aim of reducing office-hour traffic significantly.
**Algorithms for analysis**
Different route alternatives have to be compared (Figure 12). The catchment area of the optimal route would collect most commuters in the morning and allow them to disembark as close as possible to their workplaces, and vice versa in the evening. Although linking everyone’s home with their workplace is impossible, optimizing the traffic route by identifying the embarking and disembarking areas that provide the best coverage still seems statistically sound. The floor area of all the buildings in the covered region is summed up, separated into the different occupancy types, such as ‘office’, ‘private’, etc. In this simple example we exclude other considerations, such as route length, travelling time, route switching and so on. The focus is on domain-spanning analysis and the advantages of inserting an abstraction layer between the software-centred and domain-knowledge-centred views. We could use virtually the same algorithms to place block heat power plants at optimal locations in the urban area. More importantly, the separation of concerns into two separate physical layers promotes the re-use not only of concepts but also of components. Changing the investigation from bus routes to block heat power plants requires references to be changed in the PIM and minor changes in the algorithm, but the underlying models can be re-used.
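The floor-area summation by occupancy type described above can be sketched as follows; the occupancy records are hypothetical illustration data, not taken from the paper:

```python
from collections import defaultdict

# Hypothetical (occupancy_type, floor_area_m2) records for buildings
# inside a candidate route's coverage area.
covered_buildings = [
    ("office", 1200.0),
    ("private", 300.0),
    ("office", 800.0),
    ("private", 450.0),
]

def floor_area_by_occupancy(buildings):
    # Sum up the floor area of all covered buildings,
    # separated by occupancy type.
    totals = defaultdict(float)
    for occupancy, area in buildings:
        totals[occupancy] += area
    return dict(totals)

totals = floor_area_by_occupancy(covered_buildings)
```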
**Data sources**
The communal cadastral system, often simply a Geographic Information System (GIS), contains the city map with real-estate and city road information. Each real estate entry has a reference to an electronic building document containing an IFC-based model of the building. The technical specification is of no relevance for the concept.
**Basic assumptions**
- The coverage area is calculated assuming a maximum walking distance to the route. Beyond that distance it would not be attractive to utilize the bus line.
- An estimation of the number of people involved is given by the person ratio per square metre of office floor and per square metre of private home floor respectively.
**Involving different algorithms**
As a first rough estimation, the coverage area could be calculated by applying a direct surface-to-surface connection between building and bus route. Buildings within the maximum distance would belong to the coverage area.
In a more meaningful (but also more time-consuming) calculation, the exact walking distance between the building location and bus route could be calculated using a navigation service (e.g. Google Maps).
Pursuing the concept of dynamic business logic involvement through the use of components loaded at runtime, different algorithms for calculation could be consulted declaratively.
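Such declaratively exchangeable distance algorithms might be sketched as follows; the function names and the simplified planar geometry are our own assumptions, with a city-block metric standing in for an external walking-distance service:

```python
from typing import Callable, List, Tuple

Point = Tuple[float, float]

def euclidean_distance(a: Point, b: Point) -> float:
    # First rough estimate: direct surface-to-surface (straight-line) distance.
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def manhattan_distance(a: Point, b: Point) -> float:
    # Stand-in for an exact walking-distance calculation (e.g. a routing
    # service), approximated here by the city-block metric.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def coverage_area(buildings: List[Point], stop: Point, max_walk: float,
                  distance: Callable[[Point, Point], float]) -> List[Point]:
    # Buildings within the maximum walking distance belong to the coverage
    # area; the distance algorithm is injected and can be swapped at runtime.
    return [b for b in buildings if distance(b, stop) <= max_walk]

buildings = [(0.0, 0.0), (3.0, 4.0), (10.0, 10.0)]
stop = (0.0, 0.0)
rough = coverage_area(buildings, stop, 5.0, euclidean_distance)
walk = coverage_area(buildings, stop, 5.0, manhattan_distance)
```

Swapping the distance function changes the result without touching the coverage logic, mirroring the dynamic involvement of business-logic components described above.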
**Figure 12: Potential routes for a new bus line**
5.2 Example 2: Potential for inner-city green energy production
In this analysis, the potential for solar energy production in a city environment is investigated. To this end, rooftop surfaces suitable for the exploitation of solar energy are calculated. CityGML models with a low level of detail (LOD) can be derived from the base areas of buildings using cadastral information. This approach is simple and inexpensive, but lacks precise rooftop information, as shown in Figure 13. A higher level of detail can be achieved by airborne laser scanning, which is quite expensive, or by importing the required information from IFC models. The latter approach is not taken here since we want to keep redundancies out of our modelling strategy. The mere shape of a rooftop might not supply enough information for the analysis. Also, the construction type of the roof and the utilization of underlying spaces might have to be taken into account, a level of detail not envisaged in CityGML.
From this perspective it becomes apparent that one single model type cannot provide the necessary information for conducting the analysis. Cadastral databases or city models (e.g. in the CityGML format) may - depending on their level of detail - include only rough shapes of buildings, with imprecise or missing information about rooftop areas. On the other hand, building models (e.g. in IFC format) lack information about the urban context, e.g. shadowing effects by other buildings, orientation, and city quarter or cadastral field assignment. If no appropriate city model with the necessary level of detail is at hand, only the selective combination of information contained in both models yields the required result. Other investigations (Alam, N. et al. 2012) either used a CityGML model with the necessary level of detail or considered using airborne laser scanning point clouds (LIDAR) to generate sufficient 3D model data. In our example we assume that IFC models for all city buildings are available. In combination with a city model this is a relatively simple approach for generating the necessary 3D data. The effects of shadows caused by nearby objects could also be examined with this 3D data. This would be a good example of model refinement and re-use, which is clearly possible with our approach. However, the focus here is not on presenting a precise solar insolation model but on showing a modelling approach aligned to the principles of model theory and a complexity scaled to the problem statement.
In this scenario the types of data sources are clear. The city model is given in a file containing CityGML data, and the buildings are supplied in IFC-format files. After the domain expert has passed the model to the modelling expert, mapping assignments between IFC building elements and CityGML building elements can be specified. Each building in a CityGML model needs to be associated with an IFC building. The modelling expert has to find attributes in both building concepts that are suitable for expressing this association. In this example we follow a generic approach, and assume that the street name and number attributes are suitable as pseudo-IDs in both models\(^1\). For each building element in the city model, a building element for the analysis model can now be instantiated. Through the pseudo-ID, the size and orientation attributes of this element can be initialized with the respective values of the IFC building element.
The mapping notation for the attribute assignment of a rooftop segment appears as follows:
```
<Structure xmlns:xsd="... omitted for brevity .."
xmlns="http://analysis.com/Namespace">
<!-- DomainObjects $IfcBuildings and $CityGmlBuildings -->
<Name>SolarInsulationModel</Name>
<Component>
<Name>Roofs</Name>
<SourceType Name="Roof" BaseElement="$CityGmlBuildings"/>
<!-- create a Roof object for every CityGML building encountered -->
<Attribute>
<!-- The Roof object has an attribute named CityGmlID -->
<!-- copy the CityGML building ID into the attribute -->
<SourceType Name="ID" BaseElement="$CityGmlBuildings.Building"/>
<Name>CityGmlID</Name>
</Attribute>
<Component>
<!-- The Roof object aggregates an object named Segment for each Slab encountered
for every IfcRoof.Segments element -->
<Name>Segment</Name>
<SourceType Name="Slab" BaseElement="$IfcBuildings.IfcRoof.Segments"/>
<Attribute>
<!-- Attribute SegmentSize copied from
DomainObject IfcBuildings subordinate attribute -->
<SourceType Name="Size" BaseElement="$IfcBuildings.IfcRoof.Segments.Slab"/>
<Name>SegmentSize</Name>
<!-- constraint: a Segment object is only created if Size > 2.0 -->
<Value>Size>2.0</Value>
</Attribute>
<Attribute>
<!-- Attribute SegmentOrientation copied from
DomainObject IfcBuildings subordinate attribute -->
<SourceType Name="Orientation"
BaseElement="$IfcBuildings.IfcRoof.RoofSegment.Slab"/>
<Name>SegmentOrientation</Name>
</Attribute>
<Attribute>
<SourceType Name="ID" BaseElement="$IfcBuildings"/>
<Name>IfcID</Name>
<!-- joining element sets based on associated IDs -->
<Value>$IfcBuildings.ID==$CityGmlBuildings.Building.ID</Value>
</Attribute>
</Component>
</Component>
</Structure>
```
\(^1\) In this case mapping would also be possible based on building IDs. This would require a manually-created ID mapping table between the IFC building id and the CityGML building id.
Given the fact that this mapping specification operates on IFC elements, the above XML notation looks surprisingly simple. Normally one would have to follow a tedious chain of dereferencing expressions in order to get from IfcRoof elements to their related segments (roof slabs), and from there to the surface area of a segment and the normal vector of the segment surface. This would clearly overstrain the patience of the reader, and also of the user of the mapping notation. Therefore, the framework supplies a facility called ‘DomainObjects’. It supports transient object-set creation by processing an OCL query string at runtime. The result of the query is a set of objects matching the projection part of the query. This query expression would otherwise have to be put into the BaseElement attribute, which would harm the readability of the notation significantly. In our case, CityGmlBuildings and IfcBuildings objects are created beforehand in a pre-processing step. The CityGmlBuildings object contains just as many Building objects as there are buildings in the CityGML model. The only reason the Building object holds the CityGML building ID is for subsequent joining with the equivalent IFC building object. The IfcBuildings object is also a condensed view of the IfcBuilding objects of the IFC source model. For the planned analysis we need to know less about the building itself and more about its roof and the segments (slabs) used for the roof. Even the segments are stripped-down versions of the original source, holding only a size and an orientation attribute. The size attribute has a constraint, prohibiting the creation of Segment objects if the size value is less than 2.0. The last attribute in the Segment component ensures that Segment objects are only created if the IDs of the building in the IFC and in the CityGML model match.
Remember that we do this on the assumption that a generic way of transferring the IDs of one model type into the IDs of the other model type exists. If not, the DomainObjects facility could also handle explicit mapping via a manually created mapping table.
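What the generated analysis model component effectively does, joining the two element sets on the pseudo-ID and applying the size constraint, might be sketched like this; all data structures and names are hypothetical simplifications of the generated C#.Net façade:

```python
# Hypothetical, condensed views of the two source models, keyed by the
# street-name-and-number pseudo-ID assumed in the text.
citygml_buildings = [
    {"id": "Main St 1"},
    {"id": "Main St 2"},
]
ifc_buildings = [
    {"id": "Main St 1", "segments": [{"size": 5.5, "orientation": (0, 0, 1)},
                                     {"size": 1.2, "orientation": (0, 1, 0)}]},
    {"id": "Main St 2", "segments": [{"size": 3.0, "orientation": (1, 0, 0)}]},
]

def build_analysis_model(citygml, ifc, min_size=2.0):
    # Join the element sets on matching pseudo-IDs and apply the size
    # constraint: segments no larger than min_size are never instantiated.
    ifc_by_id = {b["id"]: b for b in ifc}
    roofs = []
    for cg in citygml:
        match = ifc_by_id.get(cg["id"])
        if match is None:
            continue
        segments = [{"SegmentSize": s["size"],
                     "SegmentOrientation": s["orientation"]}
                    for s in match["segments"] if s["size"] > min_size]
        roofs.append({"CityGmlID": cg["id"], "IfcID": match["id"],
                      "Segments": segments})
    return roofs

model = build_analysis_model(citygml_buildings, ifc_buildings)
```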
After the analysis model specification and the mapping specification have been completed, actual model data can be added to the analysis project of the framework. This is done using the framework management console by adding the respective model files to the analysis project. The project then contains all the data needed to create an instance of the analysis model. This can also be done through the framework management console which calls the respective interfaces of the framework. While the mapping specification itself is purely based on types, constraints work primarily on instance data. In this case, only roof segments with a size larger than the specified minimum value should be recognized and used in the analysis model.
The constraint expression can either contain constant values as stated above, or variables that need to be replaced with the respective values\(^2\). The creation of an analysis model as a pragmatic and condensed view of the whole problem field is an important part of the whole analysis. With this model the analysis algorithm can be less complex. In this case it degenerates to just two nested loops, summing up the sizes of all segments of each building and collecting the summary information per building. This simple algorithm can be passed over to the framework directly and be processed on the current instance of the analysis model. The algorithm can naturally be further enhanced, also taking into account the orientation of rooftop segments. The resulting document would then reflect the dependency between the daily movement of the sun and the energy input of the rooftop elements. This can be done by simply replacing the analysis algorithm in the framework management console, while keeping the analysis model untouched (since it already contains the normal vector of each rooftop segment in the Orientation attribute). The scope of this example encompasses multi-model analysis, but the analysis algorithm could be further enhanced to take the daily movement of the sun and shadow effects into account. For further information on the latter, refer to Alam, N. et al. (2012).
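The "two nested loops" algorithm mentioned above could look roughly as follows; the analysis model instance and field names are hypothetical:

```python
# Hypothetical analysis model instance: each roof holds the segments that
# survived the size constraint.
analysis_model = [
    {"CityGmlID": "Main St 1",
     "Segments": [{"SegmentSize": 5.5}, {"SegmentSize": 3.0}]},
    {"CityGmlID": "Main St 2",
     "Segments": [{"SegmentSize": 4.0}]},
]

def usable_roof_area(model):
    # Two nested loops: for every building, sum the sizes of all its
    # rooftop segments, collecting a per-building summary.
    summary = {}
    for roof in model:
        total = 0.0
        for segment in roof["Segments"]:
            total += segment["SegmentSize"]
        summary[roof["CityGmlID"]] = total
    return summary

summary = usable_roof_area(analysis_model)
```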
In order to conduct the same analysis on different cities, the city model and building model instances have to be exchanged and the analysis model created anew. The analysis algorithms can then be reused on the newly-created analysis model instance.
Variation of constrained values is another flexibility offered by the framework concept. When a constrained value is varied, the analysis model has to be regenerated. The analysis algorithm then operates once more on the updated model. The different results of constraint-based model analyses are stored for subsequent comparison.
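A sketch of such a constraint-value sweep, under the simplifying assumption that the analysis model is just a filtered list of segment sizes:

```python
# Hypothetical segment sizes from the source models.
segments = [1.5, 2.5, 3.5, 4.0]

def regenerate_model(min_size):
    # Regenerate the analysis model under the varied constraint value.
    return [s for s in segments if s > min_size]

# Re-run the (trivial) analysis for each constraint value and store the
# results for subsequent comparison.
results = {}
for min_size in (1.0, 2.0, 3.0):
    model = regenerate_model(min_size)
    results[min_size] = sum(model)
```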
\(^2\) For example, by using the DomainObjects facility.
6. DISCUSSION AND CONCLUSION
In this paper we discussed a conceptual framework for analysis involving more than one product model type. The interface layer between the application data model and an external data model (e.g. IFC) provided by the approach presented here allows for any relationship between the source and destination object to be implemented inside this layer. New compositions and decompositions as shown in Figure 2 may be established based on the requirements of the analyses. Structural and functional relations between source objects and resulting objects can be implemented. A dominance of decompositions, as seen from a special analysis point of view, may be relaxed or dissolved.
Our examples show cross-model analysis using a CityGML model and IFC models. In developing the framework we found three aspects of prime importance for the structure and the manageability of analysis projects conducted with the framework:
1. The ability to generate ontologies appropriate to the analysis. Although this sounds commonplace, the underlying source models can implement decompositions which might be considered disadvantageous from the viewpoint of the analysis. The interface can generate decompositions aligned to the requirements of the analysis.
2. Standard models such as CityGML and the IFCs exhibit a significant amount of complexity. Objective measures for model complexity have been presented in a separate publication (Hartmann & von Both, 2010). Applying complexity measures shows that a significant reduction in the complexity exposed to the analysis algorithm is possible with the presented framework. The reduction of the types (classes) available in the underlying model types to the extent necessary for a specific analysis is an obvious indicator of reduced complexity. In addition, it has been shown that an inappropriate class design may cause added algorithmic complexity. Mapping the existing class design of a source model to a class design advantageous for the analysis algorithm moves this complexity to the generated interface, thus freeing the algorithm itself from this task.
3. It has been shown that the model principles, namely reduction, abstraction and pragmatism, are shaped out differently in different model types because of their different intentions. Putting an analysis model on top of some source models can be seen as another modelling step in itself. During this modelling step, the modelling principles can be applied as appropriate for the analysis. This helps the modelling principles to take effect with regard to the analysis problem domain.
The approach presented here is not limited to cross-model analyses. Encapsulation of complexity and object model mapping could encourage the development of applications that previously shied away from the effort of using complex standard models.
7. FUTURE WORK
The framework concept that has been developed divides an analysis process into several interchangeable parts. Source models, analysis models, mapping specifications and algorithms are the components of this scenario. This componentized approach offers flexibility in several directions. Components can be separately enhanced or completely replaced, making reuse of components in several analyses possible. Single components expose significantly lower complexity than the combined whole. Separation into areas of expertise makes it possible to spread the distinct steps in the entire analysis process over several specific experts.
The concept of the creation of problem-specific models with streamlined complexity is not limited to analysis processes. It can also be useful in any environment where processes require condensed information that is otherwise scattered over several complex models or general data sources. The approach offers a systematic way of exposing the required pragmatic interface between the model and process in a dynamic and declarative way.
REFERENCES
Abstract—Native functional-style querying extensions for programming languages (e.g., LINQ or Java 8 streams) are widely considered declarative. However, their very limited degree of optimisation when dealing with local collection processing contradicts this statement. We show that developers constructing complex LINQ queries or combining queries expose themselves to the risk of severe performance deterioration. For an inexperienced programmer, finding an appropriate query form can be too complicated. Moreover, a manual query transformation is justified by the need to improve performance, but is achieved at the expense of clearly reflecting the actual business goal. As a result, the benefits of a declarative form and an increased level of abstraction are lost.
In this paper, we claim that moving selected automated optimisation methods elaborated for declarative query languages to the level of imperative programming languages is possible and desirable. We propose an optimisation method for collection-processing constructs based on higher-order functions, which factors out free expressions in order to avoid unnecessary repeated calculations. We have implemented and verified this idea as a simple proof-of-concept LINQ optimiser library.
I. INTRODUCTION
Since the release of LINQ (Language-Integrated Query) for the Microsoft .NET platform in 2007, there has been significant progress in the topic of extending programming languages with native querying capabilities [1]. Programming languages are mostly imperative; their semantics relies on the program stack concept. They operate on volatile data, and the meaning of collections is rather secondary. On the other hand, query languages are usually declarative and their semantics is often based on some form of algebra or logic; these languages operate mostly on collections of persistent data. The declarativity of a query language reveals itself mostly when considering operators for collections. In the case of an imperative language, operating on a collection takes the form of an explicit loop iterating over collection elements in a specified order, while in query languages one declares a desired result (e.g., a sub-collection containing the elements of a base collection matching a given selection predicate) and the filtration algorithm itself is not an element of the expression representing the query. Based on the characteristics of the data structures, the database state and the existence of additional auxiliary structures (e.g., indices), an execution environment can choose the most promising algorithm (a plan) for evaluating the query. Declarativity allows the selection of an algorithm to be postponed even to the moment of actual query execution.
In this paper we discuss to what extent solutions for processing collections within programming languages are actually declarative. To do so, we conducted extensive research on query optimisation. In databases it is a crucial process that relieves the programmer from thinking about the details of processing control flow, auxiliary data structures and algorithms.
LINQ seems to be the most robust solution offering the promise of declarative collection processing within an imperative programming environment. It is commonly used for direct processing of collections and as a mapper to resources devoid of a robust declarative query API or query optimisation. When encountering performance issues, developers are forced to manually optimise LINQ expressions or to partly abandon declarative constructs in favour of imperative code.
Consider a LINQ query expression in Listing 1 (the database diagram including the Products table is available at http://northwinddatabase.codeplex.com/) whose purpose is to find names of products with a unit price equal to a price of a product (or products) named Ikura. If the query addresses a native collection of objects, its execution is severely inefficient as the nested subquery, searching for prices of products named Ikura, is unnecessarily evaluated for each product addressed by the outer query. Although this task could be resolved in a time linearly proportional to the collection’s cardinality, the LINQ engine induces an outer loop and a nested loop, both iterating over the products’ collection. Using this example, in further sections we show that manual optimisation of complex LINQ queries is not an easy task.
LINQ enables to express the same goal in many different ways. However, evaluation times of two semantically equivalent queries may differ by several orders of magnitude. In particular, in the context of the LINQ query
```csharp
var ikuraQuery =
    from p in products
    where (
        from p2 in products
        where p2.productName == "Ikura"
        select p2.unitPrice).Contains(p.unitPrice)
    select p.productName;
```
Listing 1. Example 1 – query expression syntax.
expressions’ declarative syntax, it violates the declarative programming principle. Without knowledge of how a query engine works in the context of given data, the optimisation process is too complex and time-consuming. This is particularly true if a programmer wants to preserve the semantics and properties of the query construct.
To the best of our knowledge, the problem of automated global optimisation of LINQ queries for direct processing of collections of objects has not been addressed in the literature so far. By global optimisation we understand the ability to define an efficient query execution plan based on the whole query structure, as opposed to local optimisation, which usually targets only a single operator. Below we prove that global optimisation can be done automatically, making LINQ genuinely declarative.
Nonetheless, the problem this paper deals with is not limited to LINQ. Surprisingly, it extends to dozens of programming environments that support functional-style operations on collections of elements, such as filter, map or reduce. Pipelines and streams introduced in Java 8 are a solution equivalent to LINQ to Objects [2]. The main difference lies in the naming convention of the new operators, which corresponds to their functional prototypes (e.g., map and filter instead of LINQ's Select and Where). Furthermore, list comprehension constructs are examples of shorthand syntax for specifying projection and selection (filtering) operations. Consequently, the issues discussed concern many imperative languages exploiting this feature (e.g., Python). Fowler summarises such a functional-style programming pattern using the term collection pipeline [3]. The examples given in LINQ can be expressed in many imperative and functional programming languages. While we extend the conclusions of our work to the universe of imperative programming, they do not directly apply to functional languages (e.g., Haskell), since their principles of program evaluation are significantly different [4].
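The same pitfall can be reproduced outside LINQ. As a minimal Python sketch (the collection contents are assumed for illustration), a comprehension with a nested subquery re-evaluates that subquery for every outer element, while factoring the invariant subquery out restores linear complexity:

```python
products = [("Chai", 18.0), ("Ikura", 31.0), ("Konbu", 6.0), ("Tofu", 31.0)]

# Quadratic: the inner comprehension runs again for every outer product.
names_quadratic = [
    name for (name, price) in products
    if price in [p for (n, p) in products if n == "Ikura"]
]

# Linear: the invariant subquery is evaluated once and reused.
ikura_prices = {p for (n, p) in products if n == "Ikura"}
names_linear = [name for (name, price) in products if price in ikura_prices]
```

Both variants yield the same result; only the number of traversals of `products` differs.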
The rest of the paper is organised as follows. First, we present a brief description of the state of the art, followed by the characteristics of language-integrated query constructs. Next, we describe issues with nested independent subqueries and free expressions, revealing a huge optimisation potential. Then, we present our solution, followed by measured results and the principles of our optimisation approach, which constitute the core of the paper. The paper ends with a short summary.
II. RELATED WORK AND THE STATE-OF-THE-ART
Databases are the area of computer science where declarative programming and query optimisation have developed most extensively. Over 40 years of research on relational systems have resulted in various optimisation techniques [5][6], and numerous solutions are incorporated in available commercial products. Our research presented in this paper particularly addresses query optimisations analogous to query unnesting, dating back to the early 80s [7]. This topic keeps reappearing in the context of emerging database technologies. Different approaches to handling nested query evaluation have been proposed for object-oriented databases [8][9] and XML document-based stores [10]. However, NoSQL solutions marginalise the topic of query languages and usually rely on a minimalistic programming interface and domain-specific optimisations, mostly implemented through high redundancy and storing data in a form matching the assumed queries. Most attention from the scientific community concentrates on distributed data-parallel computing using the Map/Reduce paradigm (like Hadoop or Dryad for Azure). This paradigm can be used transparently in declarative collection processing. The Dryad programming environment based on LINQ [11] takes advantage of mechanisms similar to Map/Reduce in order to write scalable, parallel and distributed programs. To increase sharing of computations in a data centre, Dryad can benefit from the Nectar system [12], which is able to cache the results of frequently used queries and incrementally update them. The use of cached results is achieved through automatic query rewriting. Robust query and program optimisations have been developed for solutions based on the functional paradigm. According to Fegaras [13], the optimisation framework of the functional lambda-DB object-oriented database relies on a mathematical basis, i.e. the monoid comprehension calculus. It generalises many unnesting techniques proposed in the literature.
The Glasgow Haskell Compiler (GHC) for the Haskell non-strict purely functional language introduces many methods based on code rewriting. They range from relatively simple rules that can improve the efficiency of programs through modifications at a high syntactic level to more complex low-level core language transformations (e.g., let-floating, beta reduction, case swapping, case elimination) [14]. In particular, a procedure called full laziness (or fully lazy lambda lifting) has been proposed to avoid re-evaluation of inner expressions whose result could be calculated just once [15][16].
Currently, due to the introduction of lambda abstractions into object-oriented languages, the functional style of programming has become ubiquitous. Stream and collection processing constructs derived from functional languages can be naturally evaluated in parallel using multiple processor cores. Therefore, the most popular solutions, like Java 8 streams, LINQ or ScalaBlitz, enable such optimisation through various libraries or frameworks [17].
In the field of functional-style queries integrated into a programming language, the topic of query optimisation seems the most advanced in LINQ. A LINQ provider library can implement direct processing of data (e.g., LINQ to Objects, LINQ to XML) or delegate processing to a remote external resource by sending a translated query (e.g., LINQ to SQL, LINQ to Entities). To be precise, a mixture of both approaches can be used, e.g. when the query language of a remote resource cannot completely express the semantics of a LINQ query. In the case when LINQ sends a translated query, it also delegates the responsibility for query optimisation. Consequently, if the external resource engine provides optimisation, developers can fully rely on a declarative style of programming. However, in the context
of LINQ to SQL, the problem of analysing and normalising LINQ queries in order to provide a minimal and cohesive mapping to SQL has drawn the attention of the scientific community. This is caused mostly by some drawbacks of the original Microsoft solution, which in some cases may fail or produce a so-called “query avalanche” [18][19].
The issue of performance deficiencies in processing collections of objects has not passed unnoticed by the LINQ community. To cope with the shortage of optimisation compared to database engines, the i4o project (abbr. index for objects) adapted the idea of indexing to native object collections [20]. It is implemented as an alternative to the LINQ to Objects provider library. Utilising the concept of secondary access structures, i4o can deliver performance gains of several orders of magnitude for queries filtering data, at the cost of a data modification overhead.
Other examples of LINQ query optimisation tools are the Steno [21] and LinqOptimizer [22] provider libraries. Their authors focused on the significant performance deficiency of LINQ queries in contrast to equivalent manually optimised code, which can be several times faster. Experiments have shown that Steno achieves up to a 14-fold speed-up in processing of sequential data and a 2-fold speed-up for a problem processed by the DryadLINQ distributed engine [11]. The main idea behind Steno is to eliminate the overhead introduced by virtual calls to iterators, which are the fundamental mechanism used by the LINQ engine. This problem has been solved by automatic generation of imperative code omitting iterators. The optimisation mainly addresses the implementation of individual operators. This also concerns the case of nested-loop optimisation, where Steno has to analyse a series of operators only to preserve the order of iteration induced by the LINQ to Objects library implementation. This is justified by the efficiency of loop fusion and by consideration of the side effects that are allowed in LINQ. Steno is also capable of higher-level optimisation, an example being the GroupBy-Aggregate optimisation. It involves local term rewriting addressing a pair of neighbouring operators, i.e. GroupBy followed by Aggregate. When encountering such a sequence of operators, Steno replaces it with a dedicated GroupByAggregate operator that saves memory by storing per-key partial aggregates instead of the whole collection of group values. This optimisation takes advantage of LINQ declarativity by changing the course of evaluation. As a result, introducing side effects would make it incorrect. Being aware of the difficulty of automatic reasoning about side effects within queries, Steno's authors suggest developer-guided optimisation. An optimisation similar to GroupBy-Aggregate is considered in the SkyLinq project [23], which develops an alternative operator called Top.
This operator can be used to substitute a sequence of OrderBy and Take method calls (i.e. an operation getting the top k elements). The significance of LINQ grew with the introduction of LINQ to Events, an extension enabling declarative programming according to the reactive paradigm [24]. The solution derives from Functional Reactive Programming and is well suited to composing asynchronous and event-based programs [25]. Recently, this approach has attracted the attention of commercial and scientific communities and, as a programming paradigm, faces efficiency issues indicating possible areas for optimisation [26][27].
Other current research on LINQ strives to allow seamless integration of heterogeneous data sources [28]. As a result, users can transparently process and modify data shared among contributing resources. Because of its complex multilayer architecture, such an environment is not efficiency-oriented. LINQ is generally focused on local optimisation performed at the data source layer. In processing of heterogeneous and distributed data, it is unlikely that such optimisation is provided by each contributing resource. This raises the need for global optimisation performed at the level of a LINQ query itself.
Declarative functional-style constructs in general-purpose object-oriented languages are not pure. As a result, decisions concerning optimisation have to be made by programmers. Transparent and aggressive compile-time optimisations can be achieved by introducing a query language extension into a programming language compiler [29].
One of numerous examples of extending compilers of existing languages with declarative constructs is SBQL4J [30]. It enables seamless integration of SBQL queries with language instructions and their execution in the context of Java collections. SBQL4J is based on the Stack-Based Architecture (SBA) approach instead of the functional approach and offers capabilities comparable to the LINQ technology [31][32]. What distinguishes it from other programming language-integrated queries is the incorporation of several automatic optimisation methods developed for SBA. One of these methods, i.e. factoring out independent subqueries [8], enables SBQL4J to cope with the optimisation of queries equivalent to the examples discussed in this paper. It belongs to the group of optimisation methods based on query rewriting. Factoring out concerns a subquery (which in SBA represents any subexpression) that is processed many times in loops implied by so-called non-algebraic operators, even though its result is the same in subsequent loop cycles. In SBQL4J rewriting is applied at compile time, and the resulting performance improvement can be very significant, sometimes giving query response times shorter by several orders of magnitude.
III. CHARACTERISTICS OF LANGUAGE-INTEGRATED QUERY CONSTRUCTS
Declarative-style programming (especially in the context of databases) is often associated with the select-from-where syntactic sugar known from SQL, which was adapted into LINQ. The query in Listing 1 is expressed using the LINQ query expression syntax. That form lacks explicit information on the order of performed operations, and in principle a compiler could translate it to any semantically equivalent lower-level code that could be considered a query execution plan. Consequently, programmers must be particularly careful about potential side effects within
declarative constructs in order to avoid the risk of unpredicted violations. Technically, query expressions are syntactic sugar over an implementation layer using lambda expressions, higher-order functions and so-called extension methods [33]. An executable query, after removing the LINQ syntactic sugar, takes the form presented in Listing 2.
```csharp
var ikuraQuery = products.
    Where(p => products.
        Where(p2 => p2.productName == "Ikura").
        Select(p2 => p2.unitPrice).
        Contains(p.unitPrice)).
    Select(p => p.productName);
Listing 2. Example 1 – de-sugared.
```
The translated query uses the traditional, non-declarative object-oriented programming syntax. When processing collections or XML documents directly, the most crucial LINQ library extension methods (e.g., Select and Where) expose iterators that perform a specified operation on the elements of a given collection. Lambda expressions are used to express the details of such an operation, e.g. the selection predicate for the Where operator. Despite the similarity of Listing 2 to the original query expression, such a composition of method calls on the products collection determines the order of evaluation.
Due to the specific implementation based on iterators and lambda abstractions, the execution strategy of LINQ queries is deferred. Execution is performed in the presence of functions or instructions forcing iteration over the elements specified by a query. However, the result of an iteration is not saved or cached, so each execution re-evaluates the query against the given (current) data state.
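As an illustrative analogue (a Python sketch with assumed sample data), a generator behaves like a deferred LINQ query: defining it performs no work, and iteration forces evaluation against the current data state:

```python
products = [("Chai", 18.0), ("Ikura", 31.0)]
visited = []

def priced_over(limit):
    # Generator function: the body runs only when the result is iterated.
    for name, price in products:
        visited.append(name)      # side channel recording actual evaluation
        if price > limit:
            yield name

query = priced_over(20.0)         # deferred: nothing evaluated yet
assert visited == []
result = list(query)              # iteration forces evaluation
assert result == ["Ikura"]
assert visited == ["Chai", "Ikura"]
```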
The approach used in the implementation of the LINQ to Objects library is generally ubiquitous (however, not uniform) in numerous programming languages (e.g., Python, Java 8, Elixir, Ruby) [3]. A good summary describing the possible set of properties of functional-style constructs can be found in the documentation of Java 8 streams [2].
The functional nature of such constructs makes them good candidates for optimisation, owing to their intelligible query semantics. All operations in a query processing chain produce a new queryable result instead of modifying the original data; hence they do not introduce side effects. For example, during filtration of a list, no element is actually removed. Even though filter and map (common functional-style operators) are often used to directly process elements of a local in-memory collection, in reality elements can be obtained one by one from any so-called queryable data source, e.g., a data structure, a generator function, an iterator, an I/O channel or a chained pipeline of collection operations. Such generality allows querying of remote data sources, which is usually time-consuming and makes optimisation additionally desirable.
The above properties are common in implementations of language-integrated query mechanisms. However, a programmer must be sensitive to possible differences between programming languages. For example, some languages implement consumable evaluation of queries. In such a strategy, the elements of a queryable data source instance can be visited only once during its lifetime. As a result, each query instance can be evaluated only once. This property is present in Java 8 streams, whereas LINQ operators are not consumable. The laziness property has the most profound impact on the evaluation and semantics of language-integrated queries. It is connected with the lazy evaluation strategy, which assumes that the next element is returned for further processing only if necessary. Usually, the place where a lazy construct is defined does not determine the actual moment of query execution (i.e. the deferred execution strategy). Actual query execution occurs when its result is required, for example when the elements referred to by a query are iterated or counted. Operations like selection, projection, and removal of duplicates are often implemented lazily. Consequently, to ensure coherence, the execution of eager constructs (e.g., grouping or ordering) is also deferred. Lazy evaluation usually results in better performance. It is cache-friendly, since an element is processed by the whole chain of collection pipeline operations before proceeding to the next element. Moreover, when the desired query result has been reached before visiting all elements, it is not necessary to continue iterating (e.g., a query finding the first product named “Ikura”).
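The consumable property can be sketched in Python, whose generators — much like Java 8 streams, though they silently yield nothing on reuse rather than throwing — can be traversed only once, while a materialised list, like a LINQ to Objects source, can be re-evaluated repeatedly:

```python
# A generator is consumable: the second traversal finds it exhausted.
evens = (x for x in range(6) if x % 2 == 0)
first_pass = list(evens)
second_pass = list(evens)
assert first_pass == [0, 2, 4]
assert second_pass == []          # already consumed, not re-evaluated

# A list-backed pipeline can be iterated repeatedly.
evens_list = [x for x in range(6) if x % 2 == 0]
assert list(evens_list) == [0, 2, 4]
assert list(evens_list) == [0, 2, 4]
```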
In the context of query optimisation, it is important to preserve the properties of the optimised constructs. In the general case, any change in this matter can affect the semantics of application code. Switching an expression evaluation strategy from lazy to eager, or forcing immediate execution, can have serious consequences. Only lazy constructs can deal with possibly unbounded data sources. Eager evaluation of selection and projection operators on an infinite data source would require infinite computational resources and time, while lazy evaluation can return partial results.
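The unbounded-source argument can be illustrated with Python's itertools (an assumed stand-in for a lazy queryable source): selection and projection are composed over an infinite stream, yet requesting a bounded prefix terminates:

```python
from itertools import count, islice

naturals = count(1)                                        # unbounded source
squares_of_even = (n * n for n in naturals if n % 2 == 0)  # lazy select/project

# Only as many elements as requested are ever produced.
prefix = list(islice(squares_of_even, 3))
assert prefix == [4, 16, 36]
```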
In the next section, we show that preserving deferred execution, which is implied by the lazy evaluation strategy, is the factor impeding query optimisation.
IV. PERFORMANCE PITFALLS
A. Evaluation of Independent Subqueries
Analysing the expression in Listing 2, it becomes obvious that the nested query selecting products named Ikura will be executed multiple times, since it is part of a lambda abstraction (specifying a selection predicate) called for each product (the external Where operator induces a loop iterating over the elements of the products collection). This form is not efficient and makes the computational complexity quadratic (i.e. $O(n^2)$). However, searching for products named Ikura is independent of the parent query and could be evaluated just once. In order to improve query performance, a programmer must transform it. A natural way to optimise seems to be factoring out the problematic subquery into a separate instruction and assigning it to a new variable (see Listing 3). The changes could also be presented on the LINQ query expression, but because the form with extension methods is what is actually executed, it will be the basis for this study.
B. Factoring Out Constructs Executed Immediately
Although the execution of LINQ queries is deferred, the execution strategy of some expressions comprising LINQ queries can be immediate. Such expressions are evaluated locally in the place of their definition. Some operators, like aggregate functions returning a single value instead of a queryable data source, force immediate execution of a query. The query in Listing 5 contains such an expression, determining the greatest unit price in the products collection. However, it will not be evaluated until the execution of maxQuery, since it is contained in a lambda expression defining the selection predicate for the Where method.
```csharp
var maxQuery = products.
    Where(p => p.unitPrice == products.Max(p2 => p2.unitPrice)).
    Select(p => p.productName);
Listing 5. Example 2 – extension methods syntax (quadratic computational complexity).
```
Similarly to the subquery determining the price of the Ikura product, it should be evaluated only once during the execution of maxQuery and therefore needs to be factored out. Let us call such constructs free expressions. Using the same procedure as presented in the previous section, we break the query into two instructions and force immediate execution (see Listing 6).
```csharp
var maxPrice = products.Max(p2 => p2.unitPrice);
var maxQuery = products.
    Where(p => p.unitPrice == maxPrice).
    Select(p => p.productName).ToList();
Listing 6. Example 2 – immediate execution (linear computational complexity).
```

The ToList operation does not need to be applied to the expression defining maxPrice (actually, it cannot be applied, because Max returns a single value), due to its inherent immediate execution.
C. Consequences of Changing the Evaluation Order
There exist some subtle consequences concerning evaluation after the manual optimisation. In the original forms of the example queries (Listing 2 and Listing 5), nested expressions are evaluated only if the products collection is not empty. In the optimised forms (Listing 4 and Listing 6) the nested expression will be unnecessarily evaluated even when the collection is empty. This is particularly important for performance when a nested query operates on a collection different from the one the external query operates on. The current example concerns just one collection, but it is easy to imagine a situation where the collections are distinct (e.g., products from other shops kept in separate collections). In extreme cases, if the calculation of a factored-out expression is time-consuming, this can worsen overall query performance. Aside from
```
var nestedQuery = products.
    Where(p2 => p2.productName == "Ikura").
    Select(p2 => p2.unitPrice);
var ikuraQuery = products.
    Where(p => nestedQuery.Contains(p.unitPrice)).
    Select(p => p.productName);
Listing 3. Example 1 – loops in separate instructions.
```
The result of the transformation shown in Listing 3 may seem effective; however, the expected goal will not be achieved. The problem lies in the execution strategy of LINQ queries. The nestedQuery variable holds an instance of a non-executed query that will be evaluated – as in the case of the non-transformed expression (Listing 2) – at every traversal of the loop induced by the ikuraQuery Where operator.
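The same behaviour can be mimicked in Python (sample data assumed): a deferred subquery held behind a function is re-evaluated at every traversal of the outer loop, so moving it into a separate instruction alone gains nothing:

```python
products = [("Chai", 18.0), ("Ikura", 31.0), ("Tofu", 31.0)]
evaluations = []

def nested_query():
    # Deferred subquery: a fresh generator is produced on every call.
    evaluations.append(1)
    return (p for n, p in products if n == "Ikura")

# Outer loop: the nested query is re-evaluated for every product.
result = [n for n, p in products if p in nested_query()]
assert result == ["Ikura", "Tofu"]
assert len(evaluations) == len(products)   # once per outer element
```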
In Java 8 streams, proper execution of corresponding queries would generally be impossible due to the consumable property of streams. After the transformation, the selection predicate of the ikuraQuery would share the same instance of the nestedQuery stream. Evaluation of the nested query would be performed only once, at the first traversal of the loop induced by the ikuraQuery Where operator, whereas the following iterations would result in terminating query evaluation and throwing an exception.
Solving the above problems requires eliminating the deferred execution of the nested query. There exist several techniques to force immediate execution of a LINQ query. For example, the ToList method returns a list containing a materialised query result. Applying it to the nested query makes the solution more efficient (linear computational complexity) than the query in Listing 2. However, one part of the original query is then executed immediately while the other part remains deferred until the moment of actual demand. It is possible that the data in a collection changes between the creation of a query and its evaluation. This is incompatible with the original query form (and the programmer's intention), where the query is always executed completely against the current data state. After immediate execution of the nested query, one cannot be sure of this any more: the ikuraQuery can be evaluated after the data needed for calculating the nestedQuery subquery has already been modified. As a result of the transformation, the query semantics changed in a way that is difficult to detect by a programmer or by tests. Ultimately, a programmer is forced to resign from deferred execution of the whole query, as shown in Listing 4.
```
var nestedQuery = products.
    Where(p2 => p2.productName == "Ikura").
    Select(p2 => p2.unitPrice).ToList();
var ikuraQuery = products.
    Where(p => nestedQuery.Contains(p.unitPrice)).
    Select(p => p.productName).ToList();
Listing 4. Example 1 – fully immediate execution.
```
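The semantic risk of materialising only part of a query can be made concrete with a Python sketch (data assumed): once the nested subquery is materialised early, a later evaluation no longer reflects the current data state:

```python
products = [("Ikura", 31.0), ("Tofu", 31.0)]

# Fully deferred query: evaluated as a whole against the current state.
def deferred():
    prices = [p2 for n2, p2 in products if n2 == "Ikura"]
    return [n for n, p in products if p in prices]

# Partially materialised: the nested subquery is frozen at creation time.
frozen_prices = [p2 for n2, p2 in products if n2 == "Ikura"]
def partially():
    return [n for n, p in products if p in frozen_prices]

products[0] = ("Ikura", 18.0)          # the data changes before evaluation

assert deferred() == ["Ikura"]         # consistent with the current state
assert partially() == ["Tofu"]         # stale: still uses the old price 31.0
```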
Due to explicit materialisation, reusing the optimised query against a different data state becomes troublesome. For an inexperienced programmer, the way to obtain an appropriate query form can be too complicated. Without deeper knowledge of the LINQ internal semantics in the context of object data, obtaining an optimal code structure is a tricky, time-consuming and error-prone task. The example shows a lack of real independence of LINQ from the type of the data source. Despite the fact that LINQ allows unified processing of various types and sources of data, the actual execution plan relies on them. It seems that the basis for elaborating this layer of the language was mostly the integration of object-relational mapping with the type system of a programming language (which also shows at the level of the LINQ query expression syntax and implementing providers [30]).
performance issues, the transformation presented in the previous sections can have a dangerous impact on query semantics. In the second example (Listing 5), in the case of an empty collection of products, the selection predicate is not evaluated at all and the final result is simply an empty collection of product names. After optimisation (Listing 6), the expression \texttt{products.Max(p2 => p2.unitPrice)} is always evaluated at the beginning. The \texttt{Max} method applied to an empty collection throws an exception. Consequently, the behaviour of the optimised query is unsafe and inconsistent with the intent of the programmer.
To make optimisation immune to the described risk, the original order of evaluation should be restored. This could be achieved by applying the lazy loading pattern to the free expression determining \texttt{maxPrice}. In Listing 7 we introduce an improved transformation.
```csharp
var maxPriceThunk =
    new Lazy<Double>(() => products.Max(p2 => p2.unitPrice));
var maxQuery = products.
    Where(p => p.unitPrice == maxPriceThunk.Value).
    Select(p => p.productName);
Listing 7. Example 2 – lazy factoring out (linear computational complexity).
```
A \texttt{Lazy} class instance is a simple thunk – an object in memory representing an unevaluated (suspended) computation, as used in the call-by-need evaluation strategy. The argument of the \texttt{Lazy} constructor specifies a function that is evaluated at most once, only when its value is requested for the first time. The request is signalled by accessing the \texttt{Value} property of the \texttt{Lazy} instance. Consequently, the original order of evaluation of the query and the free expression is restored (except that the free expression is processed at most once), making the optimisation semantically safe. This is achieved at the expense of the overhead of accessing the \texttt{Value} property.
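The behaviour of such a thunk can be sketched in Python (a hypothetical minimal analogue of .NET's Lazy<T>; names and data are assumed): the factory runs at most once, on first access, and the result is cached:

```python
class Lazy:
    """Minimal thunk: evaluates its factory at most once, on first access."""
    def __init__(self, factory):
        self._factory = factory
        self._evaluated = False
        self._result = None

    @property
    def value(self):
        if not self._evaluated:
            self._result = self._factory()
            self._evaluated = True
        return self._result

calls = []
def expensive_max():
    calls.append(1)
    return max(price for _, price in [("Chai", 18.0), ("Ikura", 31.0)])

thunk = Lazy(expensive_max)
assert calls == []              # suspended: nothing evaluated yet
assert thunk.value == 31.0      # first access forces evaluation
assert thunk.value == 31.0      # second access hits the cache
assert len(calls) == 1          # the factory ran exactly once
```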
In the next sections we present a general approach to optimisation that preserves semantics and characteristics of an original query while reducing its computational complexity.
V. FACTORING OUT FREE EXPRESSIONS
The solutions presented for both examples share a common shortcoming: they do not preserve the deferred execution property. Our main aim is to propose a general query rewriting rule overcoming this problem. In order to keep the solution generic, additional constraints have been assumed: (1) the transformation should not break a query into separate instructions (contrary to what is shown, for example, in Listing 7), (2) we express the rules in general terms rather than LINQ-specific ones (e.g., using operators common in functional programming). Obviously, we also assume that the intent of the programmer is simply to query and not to introduce side effects deliberately.
The generalisation of the factoring-out procedure should take into account queries more complex than those presented above. The nesting level of a lambda expression in the examples presented in Listing 2 and Listing 5 is shallow, but conceptually it can be arbitrary with no need to modify the factoring-out procedure. Free expressions can be bound either globally, i.e. to an environment independent of the query, or to a lambda expression at any nesting level lower than the lambda expression containing the free expression. The examples presented in Listing 2 and Listing 5 concern the former case. A solution generalising to the latter case can be achieved by treating any subquery as a separate query and the rest as a global environment.
The basic idea behind the transformation is, first, to identify free expressions that could be evaluated before the loop induced by the operator containing them, and next, to apply an appropriate rewriting rule. This is a generalisation of the standard procedure called loop-invariant code motion, known from compilation theory [34]. An example of incorporating this idea into programming language-integrated queries can be found in the Stack-Based Architecture theory [8]. To optimise evaluation in functional languages, a similar procedure of fully lazy lambda lifting (also called full laziness) has also been proposed [16]. In both cases the rewriting rules are straightforward and make use only of the basic set of language operators. Our attempt to generalise, in a similar manner, the factoring out of free expressions within LINQ queries using only methods supplied by LINQ has been unsuccessful. In particular, LINQ operators in the presence of a queryable data source (e.g., a collection) cause iteration over its elements, whereas factoring out requires treating an empty, single-element or multiple-element result as an individual value cached for reuse in further calculation.
A. Formalising Optimisation
The procedure of factoring out free expressions can be applied to the following query pattern:
\[
\text{queryUnoptimized} ::= \text{queryExpr}(\lambda(\text{freeExpr}))
\]
where \text{queryExpr} denotes a query expression that includes a nested lambda abstraction \(\lambda(\text{freeExpr})\) containing \text{freeExpr}, an expression not bound by any lambda abstraction within the query. Additionally, we assume that \text{freeExpr} would be evaluated several times during execution, in order to make factoring out profitable. This pattern is not restricted to a whole query; it can match any subquery.
The solution requires introducing a transformation that factors out a free expression before the loop using it and applies the lazy loading evaluation strategy. Several aspects need to be addressed to make such optimisation effective and general. (1) In imperative programming languages, deferring execution is often achieved by enclosing code in a function or by introducing an iterator. In both cases, repeated execution (e.g., inside a loop implied by the map or filter collection pipeline operators) causes repeated evaluation. If this applies to a factored-out expression, then it is usually necessary to force materialisation of its result before entering the loop using it. (2) Moreover, materialisation solves the issue of factoring out consumable data sources, since they cannot be evaluated more than once. Before factoring out, the problem does not exist, since such constructs reside inside lambda abstractions (that are parameters of collection pipeline operators) and are therefore evaluated only once during a single lambda call evaluation. (3) As stated earlier, it is possible that a free
expression is skipped during evaluation of the original query. In the general case it is safe to preserve the order of evaluation by suspending materialisation of the factored-out expression, preventing its immediate execution before entering the loop using it. (4) An instance of the mechanism used for suspending materialisation of a factored-out expression should not be shared between query executions. To solve this problem, it can additionally be enclosed in a lambda abstraction. Otherwise, subsequent executions would share a cached result determined during the first execution. (5) In order to prevent collection pipeline operators from iterating over a collection, it has to be nested into a new collection as a single element. The same procedure can be applied to a single result to enable the usage of collection pipeline operators.
Let us denote the following abstract operations:
- **Collection**(arg) – creates and returns a collection consisting of the single element specified by the argument; e.g., if the argument is a collection, it returns a nested collection.
- **Immediate**(expr) – evaluates and materialises the result of the expression passed as an argument (except when the execution strategy of expr is already immediate).
- **Suspend**(lambda) – returns an instance of a mechanism for lazy loading of an expression specified by the lambda abstraction passed as an argument.
- **Value**(lazy) – returns the lazily initialised value stored by the lazy loading mechanism instance specified by the argument.
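Under the stated assumptions, the abstract operations can be sketched in Java (this is our illustration, not the paper's implementation: `Suspend` is rendered as a memoizing `Supplier`, and `Immediate`, being plain eager evaluation, is not a separate helper here):

```java
import java.util.List;
import java.util.function.Supplier;

public class Ops {
    // Collection(arg): a collection consisting of the single element arg;
    // if arg is itself a collection, this yields a nested collection.
    static <T> List<T> collection(T arg) {
        return List.of(arg);
    }

    // Suspend(lambda): a memoizing thunk; the wrapped supplier runs at most
    // once, and the result is cached for subsequent reads.
    static <T> Supplier<T> suspend(Supplier<T> lambda) {
        return new Supplier<T>() {
            private T value;
            private boolean done;
            public T get() {
                if (!done) { value = lambda.get(); done = true; }
                return value;
            }
        };
    }

    // Value(lazy): forces and returns the lazily initialised value.
    static <T> T value(Supplier<T> lazy) {
        return lazy.get();
    }
}
```

Wrapping the `suspend` call itself in a lambda, as point (4) above requires, yields a fresh cache per query execution.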
Taking advantage of the above operations, we introduce the following rewriting procedure:
```
queryOptimized ::= Collection(() => Suspend(() => Immediate(freeExpr))).
    map(lambdaParam => lambdaParam()).
    map(freeExprThunk => queryExpr(lambda(freeExprThunk))).
    flatten()
```
where \(\lambda(\text{freeExprThunk})\) is the nested lambda abstraction \(\lambda(\text{freeExpr})\) with each occurrence of the `freeExpr` expression substituted by `Value(freeExprThunk)`. This form ensures that execution of all components of the original query is deferred, assuming that the collection pipeline operators map and flatten have such an execution strategy.
The first part of the rewritten query
```
Collection(() => Suspend(() => Immediate(freeExpr)))
```
creates a collection consisting of a single element: a lambda function creating an instance of a mechanism for suspended materialisation of the factored out free expression. The subsequent map operator ensures execution of this lambda function. As a result, the next map operator will process
```
queryExpr(lambda(freeExprThunk))
```
expression only once, with `freeExprThunk` assigned the lazily loaded cached value of `freeExpr`. Therefore, the result of evaluating `queryExpr(lambda(freeExprThunk))` is equal to the result of evaluating `queryExpr(freeExpr)`. The flatten operator eliminates the outer collection implied by the `Collection` operation. Consequently, the final result of the optimised query is taken from evaluation of the `queryExpr(lambda(freeExprThunk))` expression.
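To make the mechanics concrete, the rewriting can be sketched in Java 8 streams (Java stands in for the abstract notation here so the example stays runnable; the names `freeExpr` and `memoize` are our illustrative choices). The original shape evaluates the free expression once per element; the rewritten shape wraps a memoizing thunk in a one-element stream and flattens with `flatMap`, so the free expression is evaluated once per query execution:

```java
import java.util.List;
import java.util.function.Supplier;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class FactorOutDemo {
    static int freeExprEvals = 0;

    // Stands in for a costly free expression (hypothetical example).
    static double freeExpr() {
        freeExprEvals++;
        return 10.0;
    }

    // Original shape: freeExpr sits inside the predicate lambda and is
    // re-evaluated for every element of the collection.
    static int runOriginal(List<Double> prices) {
        freeExprEvals = 0;
        prices.stream()
              .filter(p -> p == freeExpr())
              .collect(Collectors.toList());
        return freeExprEvals;
    }

    // Rewritten shape: a one-element stream holds a memoizing thunk
    // (Suspend); map and flatten collapse to flatMap; the predicate reads
    // the cached value, so freeExpr runs only once.
    static int runOptimised(List<Double> prices) {
        freeExprEvals = 0;
        Stream.of(memoize(FactorOutDemo::freeExpr))
              .flatMap(thunk -> prices.stream().filter(p -> p.equals(thunk.get())))
              .collect(Collectors.toList());
        return freeExprEvals;
    }

    static <T> Supplier<T> memoize(Supplier<T> s) {
        return new Supplier<T>() {
            private T v;
            private boolean done;
            public T get() { if (!done) { v = s.get(); done = true; } return v; }
        };
    }
}
```

Both shapes return the same filtered elements; only the number of `freeExpr` evaluations differs.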
### B. Implementing Optimisation in C#
In C#, to simplify the optimisation, we introduce an auxiliary method `AsGroup` that takes care of the `Collection` operation and the suspended evaluation of a lambda expression returning a materialised value of the free expression. Listing 8 shows the implementation of the auxiliary operator.
```csharp
static IEnumerable<Lazy<TSource>> AsGroup<TSource>(Func<TSource> sourceFunc) {
    yield return new Lazy<TSource>(sourceFunc);
}
```
Listing 8. Implementation of the AsGroup auxiliary operator.
The `Suspend` operation is achieved by the Lazy class constructor `new Lazy<TSource>(sourceFunc)`. The `yield return` statement is syntactic sugar that creates a collection available through an iterator, deferring any computations until iteration starts. In this way, a programmer avoids committing to a concrete collection type and enables the compiler to choose the best implementation on its own. `AsGroup` exposes an iterator that returns only one element, i.e., an instance of a mechanism for suspended materialisation of the factored out free expression. It is created directly before yielding, which replaces the projection `map(lambdaParam => lambdaParam())`. Consequently, the rewritten query in the case of LINQ takes the following form:
```
LINQ-deferredQueryOptimized ::= AsGroup(() => Immediate(freeExpr)).
    SelectMany(freeExprThunk => queryExpr(lambda(freeExprThunk.Value)))
```
where the `SelectMany` LINQ operator substitutes map and flatten and `freeExprThunk.Value` realises the `Value(freeExprThunk)` operation.
The above transformation can be adapted to a situation when `queryExpr(lambda(freeExprThunk.Value))` is a construct executed immediately (e.g., when it returns a single value). In that case, `SelectMany` needs to be replaced with two operations: `Select`, realising the projection, and `First`, responsible for flattening and immediate execution:
```
LINQ-immediateExpressionOptimized ::= AsGroup(() => Immediate(freeExpr)).
    Select(freeExprThunk => queryExpr(lambda(freeExprThunk.Value))).
    First()
```
The `Immediate` operation is required only when `freeExpr` is itself a deferred LINQ query. Explicit materialisation can be achieved using LINQ-specific methods, e.g., `freeExpr.ToList()`. The transformation constitutes the general rewriting rule for optimisation of LINQ queries through factoring out free expressions. Applying it to the examples from Listing 2 and Listing 5 is shown in Listing 9 and Listing 10, respectively.
```csharp
var ikuraQuery = AsGroup(() => products.
        Where(p => p.productName == "Ikura").
        Select(p => p.unitPrice).ToList()).
    SelectMany(ikuraPricesThunk => products.
        Where(p => ikuraPricesThunk.Value.Contains(p.unitPrice)).
        Select(p => p.productName));
```
Listing 9. Example 1 – after factoring out suspended free expressions optimisation.
The queries' execution strategy after optimisation remains deferred, and in the case of the second example (Listing 10), the problem of an exception when addressing an empty products collection does not occur.
VI. PERFORMANCE TESTS
We have evaluated the impact of the factoring out of free expressions optimisation in C# by applying it manually to a number of problems: *samePriceAs* – given a collection of products, find products with the same price as the product specified by a name, *maxPrice* – given a collection of products, find products with the maximal price in the collection, *promoProducts* – given a collection of products, find names of products in an imaginary sale promotion, i.e., exactly $k$ times more expensive than some other product, and *pythagoreanTriples* – from the natural numbers between 1 and $n$, count the triples satisfying the Pythagorean theorem.
In the experimental tests, the collection of products ranged from 1 to 1,000,000 elements. The size of each product averaged 175 bytes. Tests for the *samePriceAs*, *maxPrice*, *promoProducts* and *pythagoreanTriples* problems have been conducted using the queries in Listing 2, Listing 5, Listing 11, and Listing 12, respectively. The problems have been solved in a straightforward manner, and each one has at least one free expression suitable for the factoring-out optimisation. Solutions to *samePriceAs* and *maxPrice* have free nested queries, whereas *promoProducts* and *pythagoreanTriples* introduce simple mathematical calculations that can be factored out. The tests include a comparison with PLINQ and the LinqOptimizer optimisation framework. We also combine them manually with our optimisation to explore limits and further opportunities.
```csharp
var promoProducts = products.
    Where(p => products.Any(p2 => p2.unitPrice ==
        Math.Round(p.unitPrice / 1.2, 2))).
    Select(p => p.productName);
```
Listing 11. A query concerning the *promoProducts* problem before optimisation.
```csharp
var pythagoreanTriples = Enumerable.Range(1, max + 1).
    SelectMany(a => Enumerable.Range(a, max + 1 - a).
        SelectMany(b => Enumerable.Range(b, max + 1 - b).
            Where(c => a * a + b * b == c * c))).Count();
```
Listing 12. A query concerning the *pythagoreanTriples* problem before optimisation.
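For reference, the same triple-counting query can be sketched as a Java-streams analogue (illustrative: we assume inclusive ranges from 1 to max, counting triples with a ≤ b ≤ c ≤ max, which may differ by one endpoint from the `Enumerable.Range` calls in the listing):

```java
import java.util.stream.IntStream;

public class Pythagorean {
    // Java-streams analogue of the C# query: count triples a <= b <= c <= max
    // with a*a + b*b == c*c, using nested flatMap in place of SelectMany.
    static long triples(int max) {
        return IntStream.rangeClosed(1, max).boxed()
            .flatMap(a -> IntStream.rangeClosed(a, max).boxed()
                .flatMap(b -> IntStream.rangeClosed(b, max).boxed()
                    .filter(c -> a * a + b * b == c * c)))
            .count();
    }
}
```

As in the C# query, the subexpression `a * a + b * b` is free in the innermost lambda, which is what makes it a candidate for factoring out.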
We conducted our experiments on a workstation with a 4-core Intel Core i7 4790 3.6 GHz processor and 32 GB of DDR3 1600 MHz RAM, hosting Windows Server 2012 R2. Benchmarks have been compiled for the x64 platform with code optimisations enabled, targeting .NET Framework 4.5. Test results for the respective problems are presented in Fig. 1, Fig. 2, Fig. 3 and Fig. 4.
The LinqOptimizer is used in two variants: sequential (denoted by SEQ) and parallel (denoted by PAR). The latter competes with PLINQ. Each query, before and after the factoring-out optimisation, has been subjected to three further optimisation variants, i.e., PLINQ and the sequential and parallel variants of LinqOptimizer. The tests focus on query execution times and omit optimisation and compilation of a query. Most of the plots use logarithmic scales to more clearly reveal differences in performance across collection sizes. To improve readability, the plots omit optimisation variants that are generally worse. In particular, the sequential variant of LinqOptimizer is shown only if it improved query performance in any collection size range, and the better alternative between PLINQ and the parallel variant of LinqOptimizer is selected.
Results of the tests are as follows:
- Test results are consistent with the expected computational complexity. In the `samePriceAs` and `maxPrice` problems it has been reduced from quadratic to linear, achieving a gain of orders of magnitude for large collections; e.g., in the case of the second example (Listing 5 and Listing 10), the query after factoring out is more than 30,000 times faster for 100,000 products (a boost from ~115 s to ~3.8 ms).
- Except for the `pythagoreanTriples` problem, the profitability threshold of the individual factoring-out optimisation is very low compared to PLINQ and LinqOptimizer. Even for a collection of 2 objects, optimised queries can work faster than the original ones (e.g., `samePriceAs` and `maxPrice`).
- The performance penalty in the case of a collection consisting of a single element is at most 0.6 μs which corresponds to a ~60% deterioration (the `pythagoreanTriples` problem).
- When processing large collections, the factoring-out transformation can give several times better performance by taking advantage of PLINQ (especially in the case of the `promoProducts` problem). For smaller collections, PLINQ imposes overhead significantly greater than factoring out.
- The `pythagoreanTriples` problem optimisation tests show that it may be difficult to obtain a significant gain when factoring out a simple expression (i.e., \(a * a + b * b\)). A ~3% gain is achieved for n equal to 10,000. The LinqOptimizer framework seems to be designed for optimising queries involving numbers rather than complex objects. Only in the `pythagoreanTriples` problem does it outperform both PLINQ and factoring out.
- In general, combining factoring out of free expressions with LinqOptimizer is not likely to produce the best solution. However, it seems that tuning of the LinqOptimizer algorithm should be possible. In the `pythagoreanTriples` problem, PLINQ is able to produce more efficient query after factoring out, whereas LinqOptimizer favours the original query. Unfortunately, the differences are too small to be seen on the plot.
C# libraries offer a `Lazy` class realising the Suspend operation, but for performance reasons we have implemented our own lightweight version. We have experimented with different variants of performing the `Collection`, `Suspend` and `Immediate` operations, but the presented solutions generally performed better than the alternatives.
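A minimal sketch of such a lightweight lazy holder, written in Java for testability (this is our illustration of the idea, not the authors' implementation): it drops the locking and exception caching that a library Lazy class typically carries, keeping only one field and one flag.

```java
import java.util.function.Supplier;

// Single-threaded lazy holder: no synchronisation, no exception caching.
final class LightLazy<T> {
    private Supplier<T> factory;
    private T value;

    LightLazy(Supplier<T> factory) { this.factory = factory; }

    T value() {
        if (factory != null) {      // not yet initialised
            value = factory.get();
            factory = null;         // drop the factory so it can be collected
        }
        return value;
    }
}
```

The null check on the factory doubles as the "initialised" flag, which keeps the hot path to a single comparison.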
VII. AUTOMATIC OPTIMISATION
A. Free Expression Detection
The manual transformation is justified by the need to increase efficiency, but this is achieved at the expense of code clearly reflecting the business goal. As a result, the benefits of a declarative form and an increased level of abstraction are lost.
LINQ expression trees enable run-time analysis and dynamic building of LINQ queries [36]. This feature allows developing an optimisation method relying on rewriting of a LINQ abstract syntax tree. Automated detection of specified query patterns and transformation to an optimised form are required to make LINQ queries truly declarative. The previous part of this paper deals with the latter, i.e., the definition of efficacious rewriting rules for factoring out a free expression. This section describes an algorithm for detection of free expressions within a query. The procedure does not address any details of implementation for the LINQ platform; it is general with respect to functional-style programming.
Let us establish a set of definitions concerning expressions and lambda abstractions (inspired by the definitions introduced by Hughes [16]):
- **Def. 1 (bound variables of lambda).** An occurrence of a variable within a lambda \(\lambda_A\) is bound to \(\lambda_A\) if and only if it is a parameter of \(\lambda_A\).
- **Def. 2 (bound expressions of lambda).** An expression within a lambda \(\lambda_A\) is bound to \(\lambda_A\) if and only if it contains a variable bound to \(\lambda_A\).
- **Def. 3 (native lambda of expression).** The innermost lambda to which an expression \(e\) is bound is its native lambda, denoted \(n\lambda(e)\).
- **Def. 4 (free expressions in lambda).** An expression \(e\) within a lambda \(\lambda_A\) is free in \(\lambda_A\) if \(\lambda_A\) is nested in the native lambda of \(e\).
- **Def. 5 (maximal free expressions).** A maximal free expression (MFE) is a free expression of some \(\lambda_A\) that is not a proper subexpression of another free expression of \(\lambda_A\).
Additionally, to simplify the definitions and the algorithm description, we assume that names of variables are unique. Moreover, we implicitly treat a whole query as a lambda abstraction with all free variables (constituting a global environment) as its parameters. In the case of the examples from Listing 2 and Listing 5, the native lambda of each MFE is the whole query.
From the definitions above, it follows that any MFE \(e\) free in a lambda \(\lambda_A\) can be determined before \(\lambda_A\) evaluation. Precisely, it could be determined at any time during evaluation of \(n\lambda(e)\). The above statement is correct since:
1. $e$ is a free expression (see definition 5).
2. $\lambda_A$ is inside $n\lambda(e)$ (see definition 4).
3. $e$ is not bound to $\lambda_A$ (see definition 3).
4. $e$ does not contain variables bound to $\lambda_A$ (see definition 2).
5. $\lambda_A$ call does not introduce any variable (parameter) required by $e$ (see definition 1) that makes $e$ independent from $\lambda_A$.
Consequently, it is possible to factor out the expression $e$ from $\lambda_A$ and evaluate it at the level of the $n\lambda(e)$ lambda.
The algorithm uses the standard depth-first search approach and detects all MFEs during a single pass through a query expression tree. Expression visitation focuses on finding its bindings, which we define as the set of lambda abstractions declaring variables (usually as lambda parameters) used in the expression. This information is further used to determine the bindings of its parent. Usage of lambda abstraction parameters determines whether an expression is free or bound. Therefore, it is necessary to track the names of the parameters and the lambda abstractions to which they are bound. This is the task of an auxiliary map called binders. To correctly manage parameter binding, the procedure specifically handles lambda abstractions and terminal name-binding expressions.
While visiting a lambda abstraction, the binders’ map is filled with its parameters. They are visible only within the lambda abstraction. This sets the right context for the recursive visitation of the lambda body in order to detect free expressions bound specifically to the current lambda. Finally, the bindings set is returned to the lambda's parent, with the current lambda removed (information on binding to the current lambda is not relevant outside).
The binders’ map is used when visiting name-binding terminal expressions. These expressions consist only of an identifier name. If a name is found in the binders’ map, a corresponding lambda is returned (as a single-element bindings set). If a name is not bound to any lambda, then it is assumed to be a globally free variable.
The described behaviour does not apply to a name on the right-hand side of a member access operator (e.g., field names). Such a name is bound locally to its left side; therefore, field member access bindings are inherited from the left-side expression. In general, bindings for the remaining types of expressions are simply inherited from their children (the union of their sets).
In the implementation nesting level annotations for lambda abstractions and variables are introduced to simplify the binding analysis. Expression bindings provide sufficient information to determine all MFEs and their native lambdas.
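The visitation rules above can be sketched on a toy expression tree (illustrative Java of ours, not the paper's implementation; single-parameter lambdas and unique names are assumed, as in the text):

```java
import java.util.*;

// A toy expression tree with the bindings analysis described in the text.
abstract class Expr {
    // Returns the set of lambdas whose parameters this expression uses.
    abstract Set<Lambda> bindings(Map<String, Lambda> binders);
}

class Name extends Expr {               // terminal name-binding expression
    final String id;
    Name(String id) { this.id = id; }
    Set<Lambda> bindings(Map<String, Lambda> binders) {
        Lambda l = binders.get(id);
        // An unbound name is a globally free variable: empty bindings set.
        return l == null ? new HashSet<>() : new HashSet<>(Set.of(l));
    }
}

class Apply extends Expr {              // any non-binding node
    final List<Expr> children;
    Apply(Expr... cs) { children = List.of(cs); }
    Set<Lambda> bindings(Map<String, Lambda> binders) {
        Set<Lambda> out = new HashSet<>();   // inherited: union over children
        for (Expr c : children) out.addAll(c.bindings(binders));
        return out;
    }
}

class Lambda extends Expr {
    final String param;
    final Expr body;
    Lambda(String param, Expr body) { this.param = param; this.body = body; }
    Set<Lambda> bindings(Map<String, Lambda> binders) {
        binders.put(param, this);       // parameter visible inside only
        Set<Lambda> out = body.bindings(binders);
        binders.remove(param);
        out.remove(this);               // binding to self is irrelevant outside
        return out;
    }
}
```

An expression whose bindings set does not contain the lambda it syntactically sits in is free in that lambda; the bolded element of the paper's annotation corresponds to the innermost lambda remaining in the set.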
To exemplify the algorithm, let us consider the promoProducts problem shown in Listing 11. The query in its optimised form is presented in Listing 13. The expression determining a price, `Math.Round(p.unitPrice / 1.2, 2)`, is unnecessarily evaluated multiple times during execution of the inner loop implied by the Any operator. What distinguishes this example from the previous ones is that the transformation applies not to the whole query but only to the Where predicate. Additionally, the predicate is not a LINQ query but an expression returning a Boolean value. Therefore, the Select and First methods were used instead of SelectMany.
```csharp
var promoProductsOptimized = products.Where(p =>
    AsGroup(() => Math.Round(p.unitPrice / 1.2, 2)).
    Select(priceThunk =>
        products.Any(p2 => p2.unitPrice ==
            priceThunk.Value)).First());
```
Listing 13. The *promoProducts* query after optimisation.
Partial results of the algorithm's work on the unoptimised query are presented in Fig. 5. Each abstract syntax tree node of the query is annotated with three values: (1) a number indicating the order of visitation, (2) the lambda expression directly enclosing the expression, (3) the bindings set, with the bolded element denoting the native lambda of the expression. Lambda expressions have been assigned unique numbers to facilitate their identification. Bindings that are removed at the end of lambda node visitation are indicated by a strikethrough symbol.
Free expressions have their native lambda (the bolded lambda in the bindings set) different from the nearest lambda (denoted by the second annotation), i.e., the expressions with visitation order ranks 6 and 11–17. After omitting terminal expressions such as literals (constant-type nodes, ranks 16 and 17) and name bindings (bind-type nodes, ranks 6 and 15), the only MFE left to factor out is `Math.Round(p.unitPrice / 1.2, 2)`. Its native lambda is $\lambda_1$. Hence, factoring out should be applied to its indirect parent: the Any node with visitation order rank 5 (presented in Listing 11). It is the expression inducing iteration over the products collection at the highest level within $\lambda_1$.
Fig. 5. Example abstract syntax tree with the algorithm's node annotations.
B. Applying Factoring Out
The factoring out rewriting rule can be applied during visitation of lambda expressions. However, not all MFEs should be factored out. The conditions under which the optimisation is promising are described in analogous solutions [15][8], namely: (1) a free expression must not be too simple (e.g., names and literals), (2) a free expression's result should be used more than once. They can be verified during preparation for the transformation.
First, the complexity of an MFE can be examined. An appropriate threshold for applying the transformation could be introduced, e.g., based on arbitrarily set weights of the language constructs comprising an MFE. Performance tests on the promoProducts problem, which involves factoring out a relatively simple expression, have shown improvement for collections of at least 30 objects. For over 250 products, the optimised query was about twice as fast.
The second condition concerns the number of times an MFE result is used in evaluation. An additional analysis may be necessary to confirm that $n\lambda(\text{MFE})$ contains a method that causes iteration over some collection and may thus require repeated evaluation of the MFE. For example, in LINQ this mainly concerns operators parameterised with a lambda abstraction (such as Select, Where, Max, etc.). Operators operating on sets (e.g., Contains, Union) or custom ones do not indicate the optimisation. A more detailed cardinality analysis is doubtful in a programming language environment lacking a cost model.
We have implemented a prototype LINQ provider library realising the described optimisation (available at https://github.com/radamus/OptimizableLINQ). The analysis and the transformation are performed using the LINQ expression trees representation available at runtime. Access to expression trees is provided through the IQueryable<T> interface, which does not allow direct query execution. Instead, it exposes an abstract syntax tree of a query (in the form of a type-checked expression tree) to a data store provider. The provider makes use of this representation to build a query in a form (language) dedicated to a given data model (e.g., LINQ to SQL) [36].
Implementing the optimisation in the form of a LINQ provider library gives a developer the possibility to opt out of aggressive, global query optimisation, e.g., when the order of evaluation is important due to planned side effects. To enable automatic optimisation, the AsOptimizable extension method should be applied to a source collection, as shown in Listing 14 for the Ikura product example.
```csharp
var ikuraQuery = products.AsOptimizable().
    Where(p => products.Where(p2 => p2.productName == "Ikura").
        Select(p2 => p2.unitPrice).Contains(p.unitPrice)).
    Select(p => p.productName);
```
Listing 14. Example 1 – automatically optimised.
As a result, a rewritten query is compiled and becomes available for multiple uses. The one-time overhead occurring at the site of the definition is about a millisecond. A developer should consider runtime optimisation with caution when a query is used only once over a small collection. In contrast to LINQ, Java 8 stream operators are consumable, which prevents multiple usages of the same query. We are not aware of any mechanism enabling rewriting optimisations of Java 8 stream queries at runtime; nevertheless, in the case of consumable constructs the cost of optimisation done at runtime would burden each query execution.
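The consumability of Java 8 streams mentioned above can be demonstrated directly (a small illustrative sketch): reusing a Stream instance after a terminal operation throws an IllegalStateException, so a rewritten stream pipeline cannot simply be cached and re-run.

```java
import java.util.List;
import java.util.stream.Stream;

public class StreamReuse {
    // Returns true if the second traversal of the same stream fails,
    // demonstrating that Java 8 stream pipelines are consumable.
    static boolean reuseFails() {
        Stream<Integer> s = List.of(1, 2, 3).stream().map(x -> x * x);
        s.count();                  // first (and only permitted) traversal
        try {
            s.count();              // second traversal is illegal
            return false;
        } catch (IllegalStateException e) {
            return true;
        }
    }
}
```

A reusable query in Java therefore has to be wrapped in a supplier that rebuilds the pipeline, which is exactly why runtime rewriting would burden each execution.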
VIII. SUMMARY
The proposed solution demonstrates that it is possible to provide programming languages offering functional-style querying of data collections with resource-independent static optimisation mechanisms. We proposed a formal method, factoring out of free expressions, based on rewriting of higher-order functions. Its essence is to avoid unnecessary recurring calculations. Factoring out a free expression that is costly to calculate generally produces a robust performance gain. Such optimisation can be fully automated and does not require any intervention or implementation-specific knowledge from a programmer. Using simple examples, we emphasised the significance of the order of evaluation implied by the semantics of functional-style operators. Finally, we elaborated a general and safe optimisation, considering the characteristics of functional-style querying in imperative programming languages.
In contrast to the Nectar system [12], which also uses term rewriting to increase sharing of computations, our work addresses functional-style queries in general, i.e., without a context of application that would limit our optimisation. We take a similar approach to optimisation as Steno [21], LinqOptimizer [22], or SkyLinq [23]. However, we attempt to explore more aggressive, global optimisations comparable to the optimisations of database query languages.
The presented approach was verified in the Microsoft .NET environment and its Language-Integrated Query technology. However, the automated solution has not been straightforward to elaborate, due to the necessity of considering several variants implied by the execution strategies of constructs comprising LINQ queries, and due to the complexity of implementing LINQ providers.
Our optimisation for LINQ can be combined automatically with others as long as they preserve queries in an expression tree form. In other cases, fusion of optimisations has to be done manually. For example, PLINQ makes it possible to take advantage of multiple cores and achieve several times better efficiency in processing large collections. Moreover, in some cases the optimiser could automatically (or by a programmer's decision) refrain from suspending evaluation of a factored out expression and remove the overhead that it imposes. The tests showed that this results in a further performance improvement of up to ~18%.
Finally, it seems that the transformations would be most profitable if incorporated in a compiler. Considering the source-to-source transformations already performed by the C# compiler on LINQ query expressions [33], this solution suggests itself.
We believe that our work is a real step towards genuine declarative language-integrated queries. We continue our work on the optimisation of functional-style constructs processing collections. One branch of our research concerns the elaboration of methods that are aware of operator semantics, e.g., addressing complex queries taking advantage of the selection operation, which exposes a huge potential for optimisation (e.g., pushing selection [37]). We also consider adapting other methods, such as revealing weak dependencies within queries that enable further factoring out [38].
REFERENCES
Introduction to Programming in R
Matthew K. Lau
Cottonwood Ecology Group
Department of Biological Sciences
Northern Arizona University
July 12, 2011
Introductions
Why learn R?
On a recent flight from Tokyo to Beijing, at around the time my lunch tray was taken away, I remembered that I needed to learn Mandarin. "...dammit," I whispered, "I knew I forgot something."
– David Sedaris (New Yorker, July 2011)
Three Reasons
- Software does not *have* to limit analysis.
- Scripting can save time and effort.
- Free = No Cost and Free = Open Source
What can R do?
- Math (e.g. linear algebra)
- Basic Statistics
- Publication quality plots
- Simulations
- Database interfacing
- GIS
- Phylogenetics
- Multivariate statistics
- Network analysis
- Bayesian statistics
- Animations
- Make julienned fries
- and much MUCH MORE...
Requested Topics
- The basics (What the heck!)
- Data input and management
- Bayesian statistics (interfacing with WinBUGS)
- Growth curve analyses
- Summary statistics from large datasets
- GLM, GAM and Mixed Models
Lasciate ogne speranza, voi ch’intrate.
– Dante Alighieri (The Inferno)
Abandon all hope, ye who enter here.
– Dante Alighieri (The Inferno)
The Basics
Work Flow
Data Management
Analysis
Plotting
Packages
Resources
Advanced
Getting Started
- Open R.
- Say hello to the command line.
Think Intuitively
Remember, no one’s listening but R.
Math in R
What do you get when you add two and two?
> 2 + 2
[1] 4
Getting Help.
- ?, ??, help
- Googling
- Manuals and Books
How do you use `?` to learn about `help`?
> ? help
What other math commands are there?
> ? '+'
Objects
How do you create an object?
> x = 2 + 2
Objects
What is the difference between `x` and `'x'`?
```r
> x
[1] 4
> "x"
[1] "x"
```
Can you do operations with objects?
```r
> x = 2 + 2
> y = 2 + 2
> x + y
[1] 8
> x * y
[1] 16
> x/y
[1] 1
```
Objects
What can I name objects?
- Letters
- Symbols (e.g. "." and "_")
- Numbers (following a letter)
- Keep them short (7 characters or fewer)
- Avoid function names (e.g. data, factor, sqrt)
Objects
How do I know if I have created an object already? How do I get rid of it?
> ls()
> rm()
*NOTE: rm(list = ls()) will remove all objects in your workspace.
Functions
function(arguments, ...)
What do you think the functions are for the mean and standard deviation?
Apply them to our object x.
Functions - Cognates
```r
> mean(x)
[1] 4
> sd(x)
[1] NA
```
Comprehensive R Archive Network (CRAN)
- How is CRAN organized?
- CRAN.r-project.org to download R.
- Also a resource for help files, manuals and other resources.
Work Flow Outline
1. Create a project working directory.
2. Create a data folder.
3. Place data in data folder.
4. Open and save a new script in the working directory.
5. Set the working directory.
6. Call files by 'filename' or data by data/'filename'.
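Put together, the top of a script following these steps might look like the sketch below; the folder name myproject is a placeholder:

```r
# Project meta-data goes here (author, date, purpose).
setwd("~/myproject")                  # step 5: set the working directory
com = read.csv("data/CommData.csv")   # step 6: call data by data/'filename'
```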
1. Long tasks are awkward in the command line.
2. Command line entries are not saved.
3. Scripted code can be annotated.
Scripting
1. Open a new script window.
2. Save it to your working directory.
3. What happens when you run "# 2+2" using CTRL R?
> # 2+2
>
Scripting Tips
1. Start each script with meta-data.
#Matthew K. Lau 10July2011
#Script from Introduction to R class at UNCW.
2. Annotate each section and individual lines if possible.
#How to do basic math operations.
2+2 #addition
2-2 #subtraction
2*2 #multiplication
3. Be descriptive but succinct, like a lab journal.
*You’ll thank yourself later when you easily remember what you were doing when you wrote your script.
Importing
- Entering by hand (:, c)
- Importing from a file (read.csv)
How do you create a vector of integers from 1:5?
```r
> 1:5
[1] 1 2 3 4 5
```
Entering data by hand (:, c)
```r
> c(1, 2, 3, 4, 5)
[1] 1 2 3 4 5
```
How do you create a vector of integers from 1:5? (Hint: Create an object)
NOTE: Our object x was an integer, but now it’s a vector.
```r
> x = 1:5
> x
[1] 1 2 3 4 5
```
Data Types
- (Scalars)
- Vectors (numeric, character)
- Matrices and Arrays (matrix, array)
- Data Frames (data.frame)
- Lists (list)
- ...
Vectors (numeric, character)
How do you know what type of data you have?
```r
> class(x)
[1] "integer"
> mode(x)
[1] "numeric"
> y = c(1, 2, 2.5, 3, 5)
> class(y)
[1] "numeric"
> mode(y)
[1] "numeric"
```
Vectors (numeric, character)
How do you change data types?
```r
> x = as.character(x)
> class(x)
[1] "character"
> mode(x)
[1] "character"
```
Matrices and Arrays
How do you create a matrix?
> M = matrix(data = 1:9, nrow = 3, ncol = 3)
> M
[,1] [,2] [,3]
[1,] 1 4 7
[2,] 2 5 8
[3,] 3 6 9
Matrices and Arrays
How do you create an array?
```r
> A = array(1:9, dim = c(3, 3))
> A
[,1] [,2] [,3]
[1,] 1 4 7
[2,] 2 5 8
[3,] 3 6 9
> class(A)
[1] "matrix"
> mode(A)
[1] "numeric"
```
Matrices and Arrays
What about connecting two vectors together into columns?
```r
> x = 1:5
> y = 1:5
> cbind(x, y)
     x y
[1,] 1 1
[2,] 2 2
[3,] 3 3
[4,] 4 4
[5,] 5 5
```
What about connecting two vectors together into rows?
```r
> x = 1:5
> y = 1:5
> rbind(x, y)
  [,1] [,2] [,3] [,4] [,5]
x    1    2    3    4    5
y    1    2    3    4    5
```
Data Frames
How do you create a data frame with a numeric and a character vector?
```r
> x = 1:3
> y = c("a", "b", "c")
> data.frame(x, y)
x y
1 1 a
2 2 b
3 3 c
```
Lists
How do you create a list with M and A?
> list(M, A)
[[1]]
     [,1] [,2] [,3]
[1,]    1    4    7
[2,]    2    5    8
[3,]    3    6    9

[[2]]
     [,1] [,2] [,3]
[1,]    1    4    7
[2,]    2    5    8
[3,]    3    6    9
Importing a Matrix
How do you import a matrix?
Data can be found at http://perceval.bio.nau.edu/downloads/igert/IntroR-Course_Notes/CommData.csv
> com = read.csv("data/CommData.csv")
Quick Summaries of Data
```r
> head(com)
```
(Output not shown: com has 31 columns, so the console wraps each row over several lines.)
Quick Summaries of Data
```r
> summary(com)
      env            V1              V2              V3       
 Min.   :1.0   Min.   :  0.0   Min.   :  0.0   Min.   :  0.0  
 1st Qu.:1.0   1st Qu.:  0.0   1st Qu.: 36.0   1st Qu.: 36.0  
 Median :1.5   Median : 71.0   Median :107.0   Median : 89.0  
 Mean   :1.5   Mean   :107.1   Mean   :109.0   Mean   : 92.9  
 3rd Qu.:2.0   3rd Qu.:187.8   3rd Qu.:152.0   3rd Qu.:152.0  
 Max.   :2.0   Max.   :357.0   Max.   :250.0   Max.   :250.0  
```
(Summaries for the remaining columns, V4 through V30, continue in the same format.)
Manipulating
How do you pull out a single value from a vector?
```r
> x = 1:5
> x[3]
[1] 3
```
Manipulating
How do you isolate multiple values?
```r
> x = 1:5
> c(1, 2, 3)
[1] 1 2 3
> x[c(1, 2, 3)]
[1] 1 2 3
```
How do you isolate multiple values? Is there a simpler way?
Manipulating
How do you isolate multiple values? Is there a simpler way?
```r
> x = 1:5
> 1:3
[1] 1 2 3
> x[1:3]
[1] 1 2 3
```
Manipulating
How do you subset a vector?
> x = 1:5
> c(FALSE, FALSE, FALSE, TRUE, FALSE)
[1] FALSE FALSE FALSE TRUE FALSE
> x[c(FALSE, FALSE, FALSE, TRUE, FALSE)]
[1] 4
Manipulating
How do you subset a vector?
> x == 4
[1] FALSE FALSE FALSE TRUE FALSE
> x[x == 4]
[1] 4
> x != 4
[1] TRUE TRUE TRUE FALSE TRUE
Manipulating
How do you isolate all of the values of x greater than or equal to 2?
> x[x >= 2]
[1] 2 3 4 5
Manipulating
How do you isolate one number from a matrix?
     [,1] [,2] [,3]
[1,]    1    4    7
[2,]    2    5    8
[3,]    3    6    9
> M[2, 3]
[1] 8
Manipulating
How do you isolate a whole row or column from a matrix?
```r
> M
     [,1] [,2] [,3]
[1,]    1    4    7
[2,]    2    5    8
[3,]    3    6    9
> M[1, ]
[1] 1 4 7
> M[, 1]
[1] 1 2 3
```
Manipulating
How can you get all of the values of M[1,] that are greater than 2?
```
[,1] [,2] [,3]
[1,] 1 4 7
[2,] 2 5 8
[3,] 3 6 9
> M[1, ]
[1] 1 4 7
> M[1, ] > 2
[1] FALSE TRUE TRUE
> M[1, M[1, ] > 2]
[1] 4 7
```
Manipulating
How do you sort a matrix by the first column?
```r
> order(M[, 1])
[1] 1 2 3
> order(M[, 1], decreasing = TRUE)
[1] 3 2 1
```
Manipulating
How do you sort a matrix by the first column?
```r
> M[order(M[, 1]), ]
[,1] [,2] [,3]
[1,] 1 4 7
[2,] 2 5 8
[3,] 3 6 9
> M[order(M[, 1], decreasing = TRUE), ]
[,1] [,2] [,3]
[1,] 3 6 9
[2,] 2 5 8
[3,] 1 4 7
```
Manipulating
How do you sort a matrix by the first row?
```r
> M[1, ]
[1] 1 4 7
> order(M[1, ])
[1] 1 2 3
> M[, order(M[1, ])]
[,1] [,2] [,3]
[1,] 1 4 7
[2,] 2 5 8
[3,] 3 6 9
```
Manipulating
How would you sort our matrix (com) by the column names?
> colnames(com)
 [1] "env" "V1"  "V2"  "V3"  "V4"  "V5"  "V6"  "V7"  "V8"  ...
[13] "V12" "V13" "V14" "V15" "V16" "V17" "V18" "V19" "V20" ...
[25] "V24" "V25" "V26" "V27" "V28" "V29" "V30"
> com[, order(colnames(com))]
Manipulating
Can I refer to the columns by name?
```r
> colnames(com)[1:3]
[1] "env" "V1" "V2"
> attach(com)
> com[order(V1), ]
> detach(com)
> com$V1
> com$V2
```
Watch Out!
1. x and 'x'
2. `==` (comparison) and `=` (assignment)
3. T and F are TRUE and FALSE by default, unless you change them.
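A short sketch of how each of these can bite (the object names are just examples):

```r
x = 4
"x"          # the one-character string "x", not the object x
x == 4       # comparison: returns TRUE
x = 4        # assignment: silently replaces x
T = FALSE    # legal! T is only TRUE by default...
rm(T)        # ...so remove the masking object to restore it
```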
Exporting
1. Pick a file format (I recommend .csv).
2. Decide on whether you want to include row names and column names.
3. Write your object using `write.csv`, customizing output using the `row.names=FALSE` argument to exclude row names.
```r
> write.csv(com, 'data/mymatrix.csv', row.names = FALSE)
```
INTERMISSION
How do you calculate summary statistics from vectors?
```r
> x = 1:3
> y = 2:4
> length(x)
[1] 3
> mean(x)
[1] 2
> sqrt(x)
[1] 1.000000 1.414214 1.732051
```
Summary Statistics
```r
> sd(x)
[1] 1
> var(x)
[1] 1
> cor(x, y)
[1] 1
```
Summary Statistics
How do you calculate summary statistics from matrices, such as the mean for all observations (i.e. for each row)?
```r
> M
[,1] [,2] [,3]
[1,] 1 4 7
[2,] 2 5 8
[3,] 3 6 9
> apply(M, MARGIN = 1, FUN = mean)
[1] 4 5 6
```
Summary Statistics
How about for all columns?
> M
[,1] [,2] [,3]
[1,] 1 4 7
[2,] 2 5 8
[3,] 3 6 9
> apply(M, MARGIN = 2, FUN = mean)
[1] 2 5 8
Summary Statistics
What if I want the mean for a given variable \( y \) for each level of a factor \( x \)?
```r
> y = com$V1
> x = com$env
> tapply(y, INDEX = x, FUN = mean)
```
What about getting the standard deviation for y given the levels of x?
```r
> y = com$V1
> x = com$env
> tapply(y, INDEX = x, FUN = sd)
```
Summary Statistics
How about getting the sum of all of the values in the matrices in a list?
> AM.list = list(A, M)
Summary Statistics
How about getting the sum of all of the values in the matrices in a list?
```r
> AM.list = list(A, M)
> lapply(AM.list, FUN = sum)
[[1]]
[1] 45
[[2]]
[1] 45
```
t-test
How do you conduct one- and two-sample t-tests?
```r
> x = com$V1
> y = com$V2
> t.test(x)
```
t-test (one-sample)
> t.test(x)
One Sample t-test
data: x
t = 4.1358, df = 19, p-value = 0.0005619
alternative hypothesis: true mean is not equal to 0
95 percent confidence interval:
52.89918 161.30082
sample estimates:
mean of x
107.1
t-test (one-sample)
> t.test(x, alternative = "two.sided")
One Sample t-test
data: x
t = 4.1358, df = 19, p-value = 0.0005619
alternative hypothesis: true mean is not equal to 0
95 percent confidence interval:
52.89918 161.30082
sample estimates:
mean of x
107.1
t-test (one-sample)
> t.test(x, alternative = "greater")
One Sample t-test
data: x
t = 4.1358, df = 19, p-value = 0.0002810
alternative hypothesis: true mean is greater than 0
95 percent confidence interval: 62.32249 Inf
sample estimates:
mean of x
107.1
t-test (one-sample)
> t.test(x, alternative = "less")
One Sample t-test
data: x
t = 4.1358, df = 19, p-value = 0.9997
alternative hypothesis: true mean is less than 0
95 percent confidence interval:
-Inf 151.8775
sample estimates:
mean of x
107.1
t-test (two-sample)
```r
> t.test(x, y, alternative = "less")
Welch Two Sample t-test
data: x and y
t = -0.0588, df = 33.738, p-value = 0.4767
alternative hypothesis: true difference in means is less than 0
95 percent confidence interval:
-Inf 51.35174
sample estimates:
mean of x mean of y
107.10 108.95
```
Regression
How do you conduct a regression?
```r
> x = com$V1
> y = com$V2
```
```r
> xy.fit = lm(y ~ x)
> summary(xy.fit)
Call:
lm(formula = y ~ x)
Residuals:
Min 1Q Median 3Q Max
-100.81 -51.17 -13.80 25.17 157.08
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 84.8026 23.9013 3.548 0.0023 **
x 0.2255 0.1536 1.468 0.1594
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 77.54 on 18 degrees of freedom
Multiple R-squared: 0.1069, Adjusted R-squared: 0.05728
F-statistic: 2.155 on 1 and 18 DF, p-value: 0.1594
```
How do you conduct an ANOVA?
```r
> x = factor(com$env)
> y = com$V2
> anova.fit = aov(y ~ x)
> summary(anova.fit)
```
            Df Sum Sq Mean Sq  F value    Pr(>F)    
x            1  69502   69502 24.20887 0.0001104 ***
Residuals   18  51679    2871                       
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
ANOVA
> unlist(anova.fit)
$coefficients.(Intercept)
[1] 167.9
$coefficients.x2
[1] -117.9
$residuals.1
[1] 11.1
$residuals.2
[1] -24.9
$residuals.3
...
> names(anova.fit)
[1] "coefficients" "residuals" "effects" "rank"
[5] "fitted.values" "assign" "qr" "df.residual"
[9] "contrasts" "xlevels" "call" "terms"
[13] "model"
> anova.fit$residuals
    1     2     3     4     5     6     7     8     9    10 
 11.1 -24.9  82.1  82.1 -60.9 -60.9 -24.9  82.1 -96.9  11.1 
   14    15    16    17    18    19    20 
-50.0  57.0 -50.0 -14.0 -14.0  57.0  21.0 
How would you plot a histogram of the residuals?
```r
> anova.res = anova.fit$residuals
> hist(anova.res)
```
How would you make an x-y scatterplot?
> x = com$V1
> y = com$V2
> plot(x, y)
Barplots
How would you make a barplot of the means of V1 at each level of env in the com dataset?
```r
> x = com$env
> y = com$V1
> mu = tapply(y, INDEX = x, FUN = mean)
> barplot(mu)
```
Customizing Plots
How do you change the axis names?
```r
> x = com$V1
> y = com$V2
> plot(x, y)
> plot(x, y, xlab = "Variable 1", ylab = "Variable 2")
```
Customizing Plots
How do you add a regression line?
```r
> x = com$V1
> y = com$V2
> plot(x, y, xlab = "Variable 1", ylab = "Variable 2")
> abline(lm(y ~ x))
```
Multi-plot Windows
1. Setup the plot window using the `par` function.
2. Set the layout, e.g. `par(mfrow=c(1,2))` for one row of two plots.
3. Build each plot in succession.
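For example, assuming the com data are loaded, a histogram and a scatterplot can share one window like this:

```r
par(mfrow = c(1, 2))      # 1 row, 2 columns of plots
hist(com$V1)              # left panel
plot(com$V1, com$V2)      # right panel
par(mfrow = c(1, 1))      # reset to a single plot
```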
Locator - In case you get lost in a plot...
1. Build your plot.
2. Run `locator(1)` in the command line.
3. Click on the plot, R will output the coordinates.
Packages: Installation and Use
1. Find the name of the package you want.
2. Make sure you’re connected to the internet.
3. Use the command, `install.packages('package name')`
4. Use the command, `library('package name')`
5. If there are conflicting functions in two packages, detach the one you’re not using with the command, `detach(package:'package name')`
Ummm, everyone and their mother uses Excel; can R read Excel files?
> install.packages('gdata')
> library("gdata")
> read.xls("data/CommData.xls")
**R Commander**
1. R’s version of *JMP* (designed by John Fox)
2. Fully integrated script editor.
3. `install.packages('Rcmdr')`
4. `library('Rcmdr')`
5. Documentation can be found here:
http://socserv.mcmaster.ca/jfox/Misc/Rcmdr/
Resources
- **Cheat Sheet**: http://cran.r-project.org/doc/contrib/Short-refcard.pdf
- **Scripting**: http://google-styleguide.googlecode.com/svn/trunk/google-r-style.html
- **SimpleR**: http://www.calvin.edu/~stob/courses/m241/S11/Verzani-SimpleR.pdf
- **Quick-R**: http://www.statmethods.net/index.html
- **Plotting**: http://www.stat.auckland.ac.nz/~paul/RGraphics/rgraphics.html
- **Regression and ANOVA**: http://cran.r-project.org/doc/contrib/Faraway-PRA.pdf
- **Ecological Analyses**: http://ecology.msu.montana.edu/labdsv/R/
http://www.mpcer.nau.edu/igert/eco_analysis_r.html
- **Network Analyses**: http://erzuli.ss.uci.edu/R.stuff/
Writing Functions
1. Create functions as you would objects using `function`.
2. The fundamental design is:
```r
my.func = function(x, ...) {
    # insert things you want done
}
```
3. Arguments and objects created within the function are local: they are not available outside the function unless returned.
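As a sketch, here is a small (hypothetical) function for the standard error of the mean; note that n is created inside the function and is not visible afterwards:

```r
se = function(x) {
    n = length(x)         # local object: exists only inside se
    sd(x)/sqrt(n)         # the last expression is returned
}
se(1:10)
```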
Looping
1. Used to repeat a task (can be very powerful, but often slower than vectorised alternatives in R).
2. Takes arguments that determine the number of repetitions.
3. Uses the for or while commands.
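A minimal for loop, assuming the matrix M from earlier is still defined; it reproduces what apply(M, MARGIN = 2, FUN = mean) does:

```r
col.means = numeric(ncol(M))      # pre-allocate the result
for (j in 1:ncol(M)) {
    col.means[j] = mean(M[, j])   # mean of column j
}
col.means
```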
Distributed multi-agent communication system based on dynamic ontology mapping
Sally M. El-Ghamrawy* and Ali I. El-Desouky
Department of Computers and Systems,
Faculty of Engineering,
Mansoura University,
Mansoura, Egypt
E-mail: Sally@mans.edu.eg
E-mail: ali_eldesouky@yahoo.com
*Corresponding author
Abstract: Communication is the most important feature for meaningful interaction among agents in distributed multi-agent systems; it enables agents to interact to achieve their goals. Agent communication languages provide a standard for the protocol and language used in communication, but cannot provide a standard ontology, because ontology depends on the subject and concept of the communication. This lack of standardisation is known as the interoperability problem. In order to obtain semantic interoperability, agents need to reach agreement on the basis of different ontologies. In this paper, a set of communication layers is proposed to outline the communication between agents, and a multiplatform communication system (MPCS) architecture is proposed to provide a highly flexible and scalable system. In addition, a dynamic ontology mapping system for agent communication (DOMAC) is proposed, based on different mapping approaches.
Keywords: agent communication language; ACL; interoperability; knowledge query and manipulation language; KQML; multi-agent system; MAS; ontology mapping; distributed intelligent system.
Biographical notes: Sally M. El-Ghamrawy is an Assistant Teacher at MET Academy in Egypt. She received her MA in Computer Science from the Computers Engineering and Systems Department in 2006 and is currently a PhD candidate in the Department of Computers Engineering and Systems at the Faculty of Engineering, Mansoura University. She has taught and given practical training under grants from the Ministry of Communications and Information Technology in collaboration with IBM. She received an A+ certificate from CompTIA International Inc. She is also an IEEE Member.
Ali I. El-Desouky is a Full Professor in the Computers Engineering and Systems Department at the Faculty of Engineering, Mansoura University in Egypt. He is also a Visiting Part-time Professor at MET Academy. He obtained his MA and PhD from the University of Glasgow. He teaches at American and Mansoura universities. He has also held many leadership positions and supervised many scientific works. He has published more than 150 articles in well-known international journals.
1 Introduction
The proposed distributed multi-agent intelligent system (DMAIS) framework (El-Desouky and El-Ghamrawy, 2008) provides the basis for an open environment where agents interact with each other to reach their individual or shared goals in an evolving environment. To interact in such an environment, agents need to overcome many challenges; one of the most important is that agents must be able to communicate with each other. The development of the agent communication module must therefore be considered in designing the DMAIS, in order to give agents the ability to cooperate, negotiate and schedule with each other successfully. In other words, communication is the kernel of any MAS; without communication there would not be any interaction between agents. An agent framework is a set of programming tools for constructing agents, and its infrastructure provides regulations that agents follow to communicate and understand each other, thereby enabling knowledge sharing. Agent infrastructures mostly deal with the communication among agents based on a communication language using a common ontological system. Communication is the most important feature for meaningful interaction among agents in a MAS, as it enables agents to interact and share information to perform tasks to achieve their goals. To achieve this objective, agent communication languages (ACLs) have been proposed based on speech-act theory. Speech act theory is derived from the linguistic analysis of human communication; it is based on the idea that with language the speaker not only makes statements, but also performs actions (Farooq, 2002).
ACL provides a standard for the protocol and language used in communication, but cannot provide a standard ontology, because ontology depends on the subject and concept of the communication, and it is almost impossible for two agents to share the same semantic vocabulary; they usually have heterogeneous private vocabularies defined in their own private ontologies. The development of generally accepted standards will take a long time (Wang and Hongshuai, 2010). This lack of standardisation is known as the interoperability problem. In order to obtain semantic interoperability in DMAIS, agents need to reach agreement on the basis of different ontologies.
In this paper, our main concern is to develop the communication module that helps improve DMAIS performance. In brief, the paper is organised as follows. In Section 2, related work on communication in MAS is reviewed; the definition of ontology is then introduced, together with an outline of research that uses ontologies. The concept of ontology mapping is discussed, showing the different approaches proposed to solve the ontology mapping problem, and comprehensive surveys of some well-known ontology mapping systems are introduced; finally, an example of ontology mapping between agents in a MAS is illustrated. In Section 3, a set of communication layers is proposed to outline the communication between agents. In Section 4, a multiplatform communication system (MPCS) architecture is proposed to provide a highly flexible and scalable system that allows agents written in different languages to send and receive messages using the KQML standard, and allows agents to maintain several dialogues (Ds) at a time. In Section 5, a dynamic ontology mapping system for agent communication (DOMAC) is proposed, based on different mapping approaches, to support conversation among different agents. Finally, Section 6 shows the experimental evaluation and the results obtained after implementing the proposed systems, and Section 7 summarises the major contributions of the paper and proposes topics for future research.
2 Related work for communication in MAS
Communication in MAS has been a subject of interest for many researchers (Sycara et al., 1996; Suarez-Romero et al., 2005; Mellah, 2007), because communication is one of the most important issues in MAS design. The communication module in any MAS is responsible for how an agent communicates with other agents using an ACL. There are a few common ACLs, such as the knowledge query and manipulation language (KQML) (Finin et al., 1994) and the Foundation for Intelligent Physical Agents' communication language (FIPA ACL) (Labrou et al., 1999). The exchange of data between agents is vitally important to the efficiency of a MAS. Communication is required to ensure cooperation between agents: each agent's actions may depend critically on knowledge that is accessible only from another agent. Researchers investigating ACLs mention three key elements for achieving multi-agent interaction (Labrou et al., 1999):
1. a common ACL and protocol
2. a common format for the content of communication: content representation language
3. a common ontology.
Several definitions of ontology have been introduced (Weiss, 1999; Noy and McGuinness, 2001); some of them have been used widely and some are contradictory. A definition accepted in the MAS area says that an ontology is a formal representation of the concepts, characteristics and relations in a specific domain, allowing common agreement between people and software agents, and enabling a machine to use the knowledge of some application, multiple machines to share knowledge, and the knowledge to be reused. Ontologies play a key role in communication in distributed MAS, because they can provide and define a shared vocabulary about a definition of the world and the terms used in agent communication. Ontology mapping is a primary problem that has to be solved in order to allow agents with different backgrounds to adjust themselves before starting any form of cooperation or communication. Using a common ontology is impractical, because it would amount to assuming a standard communication vocabulary, and it does not take into account the conceptual requirements of agents that could appear in the future (Trojahn et al., 2008). In order to reach interoperability, two problems must be dealt with, namely structural heterogeneity and semantic heterogeneity (Wache et al., 2001).
Several projects have used ontologies in agent-based systems. Tian and Cao (2005) present an ontology-driven multi-agent architecture that supports sharing and reuse among different types of knowledge acquisition agents. Other projects use ontologies to describe agent behaviour: Laclavik et al. (2006) developed an architecture using a semantic knowledge model to define the behaviour of agents. Şandru et al. (2005) propose a generic multi-agent task-oriented architecture based on a formal model described using the unified problem solving method description language (UPML) (Fensel et al., 2003). Gómez et al. (2001) also describe a UPML-based framework to build information agents by reusing a library of domain-independent problem solving components. Hajnal et al. (2007) developed software to generate JADE agent code from ontology-based descriptions for the K4Care system (K4CARE, 2008). Different approaches have been proposed to solve the ontology mapping problem, and comprehensive surveys of some well-known ontology mapping systems have been introduced, such as GLUE (Doan et al., 2002), QOM (Ehrig and Staab, 2004), PROMPT (Noy and
The most common systems that participated in the ontology alignment evaluation initiative (OAEI) campaign are the following. Falcon-AO (Qu et al., 2006) is a similarity-based generic ontology mapping system; it consists of three elementary matchers, i.e., V-Doc, I-Sub and GMO, and one ontology partitioner, PBM. V-Doc constructs a virtual document for each URIref and then measures their similarity in a vector space model. RiMOM (Tang et al., 2006) is a general ontology mapping system based on Bayesian decision theory; it utilises normalisation and NLP techniques and integrates multiple strategies for ontology mapping. LILY (Wang and Xu, 2007) is a generic ontology mapping system based on the extraction of semantic sub-graphs; it exploits both linguistic and structural information in semantic sub-graphs to generate initial alignments, and a subsequent similarity propagation strategy is then applied to produce more alignments if necessary. ASMOV (Jean-Mary and Kabuka, 2007) is an automated ontology mapping tool that iteratively calculates the similarity between concepts in ontologies by analysing four features, such as textual description and structural information; it then combines the measures of these four features using a weighted sum, with weights adjusted based on some static rules. PRIOR+ (Ming et al., 2008) introduces a new generic ontology mapping approach that measures both the linguistic and the structural similarities of the ontologies; more specifically, three kinds of similarity are calculated, i.e., edit distance-based similarity, profile similarity and structural similarity.
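As an illustration of the edit distance-based label similarity used by systems such as PRIOR+, a minimal sketch (the function names and the normalisation to [0, 1] are our own assumptions, not taken from any of the cited systems):

```python
def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance with rolling rows.
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            cur[j] = min(prev[j] + 1,         # deletion
                         cur[j - 1] + 1,      # insertion
                         prev[j - 1] + cost)  # substitution
        prev = cur
    return prev[n]

def label_similarity(a: str, b: str) -> float:
    # Normalise to [0, 1]: 1.0 means the concept labels are identical.
    if not a and not b:
        return 1.0
    return 1.0 - edit_distance(a.lower(), b.lower()) / max(len(a), len(b))
```

A mapping system would combine such a linguistic score with profile and structural similarities, e.g., as a weighted sum, as ASMOV and PRIOR+ do.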
Figure 1 The proposed communication layers (see online version for colours)
3 The proposed agent communication layers
The process of communication between agents in DMAIS is divided into five main layers; each layer provides information to the layer above it and receives information from the layer below it. The proposed five layers are shown in Figure 1. The communication layers build on the message transport protocol [transmission control protocol/internet protocol (TCP/IP)], extended by the network infrastructure layer, the ACL layer, the content language layer and the ontology layer. Each layer is illustrated below:
3.1 Message transport protocol layer
The message transport layer is the lowest layer in the proposed agent communication layers. Since data transmission from source machines to destination machines through networks is realised in this layer, it can be regarded as the transport service provider. This layer is the physical world consisting of agent host machines. There are many protocols that can be used as a standard network transport service, namely TCP/IP, UDP, HTTP, FTP, IIOP, RMI and SMTP. Hence, the communication between agents located on distributed machines must be built on some kind of network protocol. In the proposed agent communication layers, TCP/IP is used.
3.2 Network infrastructure layer
The network infrastructure layer, above the message transport layer, plays a vital role in connecting the agent layers (the logical layer) with the network layers (the physical layer), starting with the message transport layer. This connection is guaranteed and realised by two well-defined interfaces: one between the agent layers and this infrastructure, and the other between this infrastructure and the network layer (message transport layer). Most agents run on different machines and need to communicate and exchange different kinds of data over the networks. If the DMAIS were built directly on the distributed network, communication between agents would incur great overhead and be costly and time consuming. This layer is therefore proposed to solve that problem by providing an interface that keeps agents unaware of physical-network issues. Once this layer has been defined, agents can send and receive messages to/from any other agent without concern about network issues; when messages are sent from agents to the message transport layer of a specific agent, they should be delivered to their destination without further interaction with agents. This raises the need to provide two fundamental services in this layer: agent name resolution and agent message delivery. Finally, this layer provides transparent support for network communication between agents. This transparency means that agents deal only with the problem of when and with whom to interact, leaving the problem of how to interact to the network layer (message transport layer). In this way, the agent layers (the logical layer) and the network layer (the physical layer) can be implemented independently.
3.3 ACL layer
To establish communication, any two agents communicate through a language. At present there are two mainstreams in communication languages: one is FIPA-ACL (Labrou et al., 1999), proposed by the European FIPA institute; the other is KQML (Finin et al., 1994), proposed by the knowledge share effort (KSE) research group of the American DARPA and based on the linguistic theory of the speech act. To express communication between agents in the ACL layer, KQML is used. KQML is a high-level communication language and protocol for exchanging information and sharing knowledge; it provides the basic format for expressing and processing messages and supports sharing information among agents (Farooq, 2002). It was conceived both as a message format and as a message handling protocol to support run-time knowledge sharing among agents.
3.3.1 Knowledge query and manipulation language
In this layer, the KQML language is chosen as the internal format of the agents' messages; a message is then translated into any other language according to the destination agent. The main advantages that motivated the choice of KQML are:
1. KQML messages are declarative, simple, readable and extensible.
2. Since it has a layered structure and KQML messages are unaware of the content of the message they carry, KQML can easily be integrated with other system components.
3. KQML is independent of the network transport mechanisms (TCP/IP, SMTP, IIOP, etc.) and the content languages (SQL, PROLOG, SL, etc.).
4. KQML has the potential to enhance the capabilities and functionality of large scale integration and interoperability efforts in communication and information technology.
Any KQML message structure consists of three layers: content, message and communication layers.
- **The content layer**: It contains the actual content of the message, specified in any language. KQML supports content expressed in ASCII as well as in binary code.
- **The message layer**: It is the core of KQML. Its basic function is to identify the protocol used to send a message. In addition, it includes optional parts describing the content, such as the content language and a communication theme descriptor, so that KQML can analyse, route and deliver the content.
- **The communication layer**: It realises the communication features, using specific identifiers to mark the parameters of the low-level message; it consists of low-level communication parameters, such as the sender, receiver and message identities.
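A KQML message is a performative followed by keyword/value parameter pairs. The reserved keywords :sender, :receiver, :language, :ontology and :content are standard KQML; the helper function and the example field values below are only an illustrative sketch, not part of MPCS:

```python
def kqml_message(performative: str, **params: str) -> str:
    # Assemble a KQML message: a performative followed by
    # keyword/value parameter pairs, in parenthesised form.
    fields = " ".join(f":{key} {value}" for key, value in params.items())
    return f"({performative} {fields})"

# A hypothetical query from agent-A to agent-B, with KIF content
# and a "trade" ontology (both chosen for illustration only).
msg = kqml_message(
    "ask-one",
    sender="agent-A",
    receiver="agent-B",
    language="KIF",
    ontology="trade",
    content='"(price item-42 ?p)"',
)
```

Here the content layer is the :content string, the message layer is the performative plus the :language and :ontology descriptors, and the communication layer is the :sender/:receiver identities.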
3.3.2 The speech act theory
Artificial intelligence researchers have exploited speech act theory to model communication between software agents. Austin suggests that the role of language in communication is to impart actions (Labrou and Finin, 1996): speakers do not simply utter sentences that are true or false, but rather perform speech actions such as requests and suggestions. Consequently, all utterances are speech acts, i.e., they are actions of some sort. Speech act theory (Labrou and Finin, 1996; Searle et al., 1980) considers three aspects of utterances. **Locution** refers to the act of utterance itself. **Illocution** refers to the 'type' of utterance, as in a request to turn on the heater; it conveys the speaker's intentions to the listener. **Perlocution** refers to the effect of an utterance, i.e., how it influences the recipient.
3.4 The content layer
Above the agent communication layer there is the content language layer, which contains the actual information of a message. Different content languages can be used within a single agent or MAS, such as the semantic language (SL), SQL, PROLOG or any other representation. In this layer, the knowledge interchange format (KIF) is used.
3.4.1 Knowledge interchange format
KIF is used in this platform; it is a general-purpose content language developed in the Knowledge Sharing Effort. The Interlingua Group has been developing a common language for expressing the content of a knowledge base, and has published a specification document describing KIF (Genesereth and Fikes, 1992). KIF was chosen for many reasons:
1. It is a recognised representation format for agent ontologies.
2. It is an extremely expressive language, supporting full first-order logic. The expressiveness of KIF allows the exchange of information with arbitrary complexity and with variable degrees of completeness.
3. The semantics of sentences written in KIF are declarative and interpreter independent, so no ambiguity exists in the information exchanged (indirectly) between agents.
4. Defining a mapping from KIF to XML is more acceptable.
5. It can be used to support translation from one content language to another, or as a common content language between two agents that use different native representation languages.
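As an illustration of KIF's first-order expressiveness, a hypothetical sentence (the predicates book and author are our own example, not from the paper):

```
;; Every book has at least one author (illustrative content only).
(forall (?x)
  (=> (book ?x)
      (exists (?y) (author ?x ?y))))
```

A sentence like this could be carried unchanged in the :content field of a KQML message, since KQML is unaware of the content it transports.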
3.4.2 Dialogues
After defining the content language layer that contains the actual message, the messages between agents are grouped into Ds. A D must be established first whenever an agent wants to communicate with another agent. The benefit of using Ds in the proposed communication layers is that when an agent wants to communicate with two or more agents at the same time, it can maintain several Ds at a time, with the same or with different receiving agents.
3.5 The ontology layer
The ontology layer is used to define a common vocabulary for agents to communicate with one another; it is used to represent the content of the messages exchanged among agents, to reduce the conceptual and terminological confusion that often appears among different people and organisations. Ontology determines the semantics of the concepts used in the content language. The actual meaning of the message content is captured in the ontology layer. This layer gives detailed definitions of the syntax and the semantics of the message; these definitions are vital in the ontology mapping process. To enable mapping of different ontologies, the ontology layer must provide an explicit description of ontologies in a way that is understandable to agents. So when agents want to communicate, they have to share the ontology used for communication by using this layer. This means that this layer in each agent must do the following:
1. specify the conceptualisation of the domain they operate on
2. share the specification of this conceptualisation by recognising the same objects and the same relations among the objects in the domain they operate on
3. use the same language and the same words for describing the same objects
4. define how to model this domain together with possible restrictions.
The concept of ontology is used for the formalisation of knowledge in terms of classes, properties, instances and relations. Many ontology languages have been proposed as formal languages to encode ontologies, such as the resource description framework (RDF) (Lassila and Swick, 1999), RDF schema (RDFS) (http://www.w3.org/TR/1998/WD-rdf-schema-19980409/), the ontology inference layer (OIL) (Fensel et al., 2001), DAML+OIL (2001), and the web ontology language (OWL) (McGuinness and Van Harmelen, 2004). The development of the most used ontology languages is shown in Figure 2. In this layer, the web ontology language OWL is used.
Figure 2 The development of the ontology languages (see online version for colours)
3.5.1 The web ontology language
OWL (McGuinness and Van Harmelen, 2004) is a language used to describe ontologies in the form of classes and relations among them, together with further restrictions and their intended use. It is designed primarily for WWW documents and applications by the W3C (http://www.W3C.org); it is characterised by formal semantics and RDF/XML-based serialisations for the semantic web, but it can be used for any other domain as well. OWL is built on RDF/RDFS and uses XML schema constructs (Thompson et al., 2001). It is not simply a language for a message format, like XML, but a language intended for knowledge and ontology representation. OWL was chosen as the language to represent the ontologies in the proposed ontology layer for a number of reasons:
1. OWL, standardised by the W3C, provides an ontology modelling language with defined formal semantics.
2. OWL provides a standard way to explicitly express the semantics. XML itself does not provide any explicit description of intended use of data.
3. By explicitly expressing the semantics, the semantics can be separated from the code, so less code is needed to implement the communicating agents. OWL provides a common language to define semantics so that anyone can understand it.
4. OWL uses XML syntax and builds on RDF syntax and semantics, i.e., it builds on existing, widely used technologies, so common XML and RDF tools can work with OWL as well.
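As a small illustration of how OWL expresses classes and relations in RDF/XML, a hypothetical fragment (the names Book, Publication, hasAuthor and Person are invented for the example, not taken from the paper's ontologies):

```xml
<owl:Class rdf:ID="Book">
  <rdfs:subClassOf rdf:resource="#Publication"/>
</owl:Class>

<owl:ObjectProperty rdf:ID="hasAuthor">
  <rdfs:domain rdf:resource="#Book"/>
  <rdfs:range rdf:resource="#Person"/>
</owl:ObjectProperty>
```

Such explicit class and property definitions are exactly what the ontology layer exposes to agents, and what DOMAC compares when mapping two ontologies.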
4 The proposed MPCS architecture
In the proposed DMAIS framework (El-Desouky and El-Ghamrawy, 2008), there are eight main modules that give the agents the ability to interact and coordinate with each other. The most critical module and the kernel of DMAIS is the communication module; it is responsible for all the interactions between agents, and enables communication between the other modules in DMAIS by means of message passing. It plays an essential role in allowing agents to exchange information and coordinate their actions. In this sense, a communication module in DMAIS is proposed to control the communication process. First, an MPCS is designed as a modular architecture, as shown in Figure 3, that permits flexibility, scalability and interoperability, allowing the system to be more extensible.
Figure 3 The proposed MPCS (see online version for colours)
MPCS has been designed with a decentralised architecture; this is done by distributing the functions of the system among different interchangeable modules, based on using separate communication layers for each agent, as shown in Figure 3. This ensures the efficiency of the system by avoiding the bottlenecks that might occur in a centralised architecture. The agents involved in the communication may be local to a single platform or on different platforms; two modes of communication are therefore involved for message delivery in MPCS, local and global communication. The MPCS platform has several advantages:
1. It allows agents written in different languages to send and receive messages using the KQML standard.
2. It provides a highly flexible and scalable system that supports loading a large number of agents. In this way, agents developed on any other platform can communicate, provided that the necessary modules have been implemented.
3. MPCS has a distributed architecture, as its functions are distributed among different interchangeable modules. This distributed feature is obtained naturally and easily, thanks to network transparency.
4. MPCS is reliable and fault tolerant, and can easily be extended further.
5. The core modules of MPCS are distributed on multiple machines. This ensures that the failure of one machine will not bring the whole system down and does not affect the agent system working on the remaining machines.
These advantages have been achieved through the use of interchangeable modules, as shown in Figure 3, which ensures efficiency and avoids bottlenecks by distributing the functions of the system among different modules. MPCS has three main phases, each grouping the modules and sub-modules that interact directly with each other to achieve a specific task. The three phases are illustrated in the next subsections.
4.1 Initiation phase
This phase contains the modules that are responsible for the interaction and registration of agents, and it contains the central unit of control that distributes tasks to the modules that can accomplish them.
- The task distributor (TD) (central unit of control) is the core of the system. This module's ultimate goal is to distribute the main tasks of the platform to the appropriate modules. It also creates a list of all the agents that are using the system.
- The agent registration (AR) module is responsible for the registration of all the agents connected to the system at a given time. Any agent that wants to communicate with other agents needs to register in the system through this module.
- Agents interact directly through the agent interface (AI), so the communication platform is separated from the agents, in order to simplify the management of the platform. The interface has been designed to give the programmer a dynamic interface for utilising the features of the AR module.
In this phase, a library of interaction protocols has been provided allowing MPCS agents to communicate based on KQML specifications.
4.2 Handling phase
The handling phase contains the modules that are responsible for handling the messages between agents. The messages between agents are grouped into Ds: if an agent wants to communicate with another agent, it first needs to create a D with it, to which all subsequent messages between both are sent/received. An agent can maintain several Ds at a time, with the same or with different receiving agents. The main goal of the MPCS is to provide high-level management of the Ds between agents registered in the system, to ensure the delivery of messages between them.
• Each agent has a dialogue master (DM) module to manage all of its Ds. This module is created automatically once the agent registers in the system. In cooperation with other modules, it is responsible for maintaining the agent's Ds, creating new Ds and sending/receiving messages within a D. The DM can potentially receive a large number of D requests; to avoid bottlenecks, it distributes its work among dialogue sub-master modules.
• A dialogue sub-master module exists for each pair of sending and receiving agents, responsible for creating Ds and assigning messages to them.
• The policy manager (PM) module: its goal is to check whether a message is allowed in a given D. It can permit any message sequence within a D or limit it to one or several specific courses of action. For example, it might restrict the communication between specific agents to a simple question/answer D, while other agents might be allowed free communication. This can be achieved by using the library of interaction protocols (FIPA Interaction Protocol Library, 2001); the protocols may range from simple query and request protocols to more complex ones.
For this reason, the PM is a completely independent module that can adapt to user requirements.
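A PM check of this kind can be sketched as a lookup in a protocol table. The table below encodes a simple question/answer D; the performative names follow KQML, but the table contents and the function are our own illustrative assumptions, not the MPCS implementation:

```python
from typing import Optional

# A hypothetical protocol table for a question/answer D: for each
# performative, the set of performatives allowed to follow it.
QUERY_PROTOCOL = {
    None: {"ask-one", "ask-all"},   # performatives that may open the D
    "ask-one": {"tell", "sorry"},   # a query is answered or declined
    "ask-all": {"tell", "sorry"},
    "tell": set(),                  # the D is closed after an answer
    "sorry": set(),
}

def is_allowed(previous: Optional[str], performative: str,
               protocol=QUERY_PROTOCOL) -> bool:
    # The PM permits a message only if the protocol lists its
    # performative as a valid successor of the previous one.
    return performative in protocol.get(previous, set())
```

Swapping in a different table (e.g., a free-communication protocol that allows any successor) is what makes the PM adaptable to user requirements.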
4.3 Processing phase
The processing phase contains the modules that are responsible for sending a message to a specific agent, checking its address to determine whether it is on a platform different from the sending agent's. If the received message is written in a different language, this phase is also responsible for translating it.
• A message distributor (MD) module, similar to the DM, is responsible for distributing the messages, avoiding a possible bottleneck by delegating the request to send a message to a message sub-distributor (MSD).
• This MSD is responsible for processing all the messages that belong to the same destination platform. The system will contain as many MSDs as platforms with which any communication is maintained.
• The message translator (MT) module serialises the message from the internal format of the system to the format of the destination platform. Its main goal is to detect the communication language used by the sender agent; if the language used differs from that of the receiver agent, or vice versa, a translation between those languages must be done in cooperation with the ontology manager module. The data flow diagram for the translation approach is shown in Figure 4.
• The transport protocol manager (TPM) module is defined in order to send/receive messages to/from other platforms. This module is used depending on the destination platform. New transport protocols can be inserted, by separating the code into an independent module, in order to allow different platforms to communicate with each other. External agents can either use the implemented default transport protocol or another protocol; the default protocol can be modified in the necessary communication parameters in order to produce a new protocol.
• The ontology manager module stores information about the ontologies and provides the facilities that system administrators need to set up and evolve an ontology. Additionally, it provides means for defining mappings between autonomous ontologies.
This module uses the DOMAC system, proposed in the next section, to perform the mappings between different ontologies. The ontology manager has two main roles: first, it distributes copies of ontologies to requesting agents; second, it informs committing agents of changes in an ontology. The ontology manager module manages all the Ds, trying to help when needed, and provides some services:
1. the ability to translate expressions between two different ontologies
2. the ability to learn from the ontology services already provided, so that this information can be used in a future negotiation
3. the capability of defining, modifying and deleting expressions and ontology definitions.
The communication language we are using to implement the module is KQML. The ontology manager module includes basic domain ontologies represented in OWL.
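For illustration, a KQML message is a textual performative with keyword parameters. The helper below assembles one such message; the agent names, ontology name and content are hypothetical, not taken from the system:

```python
def kqml_message(performative: str, **params: str) -> str:
    """Render a KQML performative such as
    (ask-one :sender agent1 :receiver agent2 ...)."""
    fields = " ".join(f":{k} {v}" for k, v in params.items())
    return f"({performative} {fields})"


# Illustrative query from agent1 to agent2 about a domain ontology.
msg = kqml_message(
    "ask-one",
    sender="agent1",
    receiver="agent2",
    ontology="university",      # hypothetical domain ontology
    language="KIF",
    content='"(lecturer ?x)"',
)
```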
Figure 4 The data flow diagram for the MT module (see online version for colours)
4.4 How the message is handled in MPCS
A description of how a message is transmitted is given in Figure 5; this example serves to better explain the function of the proposed platform. The sending agent must register first, and the corresponding D must have been created previously. First, the sending agent indicates to the AI that it wants to send a message to another agent through a D. Then, the AI delegates the message transmission to the DM, which in turn assigns the message to the dialogue sub-manager (DSM).
**Figure 5** Sending local or global message in MPCS (see online version for colours)
Both agents may maintain several Ds at the same time, so the sending agent must inform the DSM of the D to which the message is to be delegated. Then, the D verifies the message state with the help of the PM. If the state of the message makes semantic sense, the D communicates with the TD to locate the receiving agent. When the TD receives the outgoing message, it checks the agent's location. If the sending and receiving agents are in the same platform, the message is sent without the help of the MD; otherwise, if the receiving agent is in an external platform, the TD delegates the outgoing message transmission to the MD, which searches for the MSD corresponding to the destination platform.
Figure 6 The interaction messages between MD, MSD, AR and MT modules (see online version for colours)
If the MSD does not exist, it is created at this point, with the necessary parameters for establishing the communication obtained from the AR module. Once the MSD has been located, it is assigned the message to be sent, and in turn assigns the message to the appropriate MT. The interaction between these four modules, namely MD, MSD, AR and MT, is shown in Figure 6. The message is translated into the format accepted by the destination platform, as shown in Figure 5. It is then sent by the MSD to the TPM used for communication with the given platform. Finally, the TPM sends the outgoing message to the agent in the other platform. A preliminary experiment is conducted in Section 6, indicating that MPCS has a scalability advantage, as it behaves efficiently under full-load conditions compared to recent systems.
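The routing rule just described (local delivery when both agents share a platform, otherwise delegation to the MD, which lazily creates one MSD per destination platform) can be sketched as follows. The data structures are simplified stand-ins for the real modules:

```python
class MessageDistributor:
    """Sketch of the MD: holds one message sub-distributor (MSD)
    per destination platform, created on first use."""

    def __init__(self) -> None:
        self.msds: dict[str, list] = {}

    def get_or_create_msd(self, platform: str) -> list:
        # The MSD is created lazily, the first time a message
        # targets its platform (here modelled as a simple queue).
        if platform not in self.msds:
            self.msds[platform] = []
        return self.msds[platform]


def route_message(msg, sender_platform, receiver_platform, local_inbox, md):
    """The TD's dispatch rule from the text above."""
    if sender_platform == receiver_platform:
        local_inbox.append(msg)          # delivered without the MD's help
    else:
        md.get_or_create_msd(receiver_platform).append(msg)
```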
5 The proposed DOMAC
Ontology mapping takes two ontologies as input and creates a semantic correspondence between the entities in the two input ontologies. The ontology manager, described in the previous section, will monitor and assist the communication process at the moment it is happening, without having to map all the ontologies involved beforehand. As shown in Figure 7, there are two ontologies (Ontology1 and Ontology2) belonging to two agents, Agent1 and Agent2, respectively. Suppose that Agent1 needs some information to complete its task, and it knows that the information is probably available in a database managed by Agent2. As a result, Agent1 sends a message to Agent2; the message is translated and sent to Agent2.
Figure 7 Ontology mapping between two agents (see online version for colours)
The two agents must first detect whether they use the same ontology or different ontologies. If they use different ontologies and no mapping is known, the agents should try to establish one. An alignment between Ontology1 and Ontology2 is then established to generate a link between them, and this link is sent to Agent2 to help it understand Agent1's message. For this purpose, an ontology mapping algorithm is needed in the ontology manager proposed in the MPCS. As a result, a DOMAC is proposed to show agents how to establish a mapping between two ontologies. The DOMAC is shown in Figure 8; its main goal is to map different ontologies. The input of DOMAC is two ontologies, O1 and O2, stored in the ontologies repository and expressed in the form of formal taxonomies or ontologies; the language used for describing the ontologies is OWL. The output is a mapping, also called the mapping result, between the input taxonomies or ontologies. A mapping can be represented in different ways depending on its use; for example, mappings can be represented as queries, bridging axioms or an instance in a mapping ontology.
5.1 The input phase
The input to the DOMAC is the set of heterogeneous ontologies stored in the ontologies repository; these different ontologies are to be mapped by the DOMAC system. The ontologies stored in this repository are expressed in the OWL language; OWL is built on RDF/RDFS and uses XML Schema constructs. The repository is populated in three main ways:
1. Downloading ontologies from ontology libraries. The Protégé ontology library (Grosso et al., 1999) is used; Protégé is a free, open-source ontology editor and knowledge base framework. The Protégé-OWL editor is an extension of Protégé that supports OWL and enables users to load and save OWL and RDF ontologies.
2. ACL messages translated into OWL ontologies by the MT module in the MPCS, as shown in Figure 3.
3. Messages sent by external agents to the MPCS in the form of OWL ontologies.
Figure 8 Dynamic ontology mapping system for agent communication (see online version for colours)
5.2 The DOMAC processing phase
There are four main modules in the processing phase of DOMAC. The first is the parsing module, whose main goal is to handle the OWL ontologies stored in the ontologies repository. First, the XML converter converts the OWL message into an ontologically annotated XML document (i.e., the content of the OWL message is encoded as an XML document). This is because parsing OWL messages is a big overhead in agent development, whereas the XML encoding makes parsers easier to develop, since off-the-shelf XML parsers can be used. The XML encoding thus enhances the canonical syntactic encoding. The XML ontologies are then parsed and pre-processed by removing stop words, stemming, and tokenising. The parsing module then sends the parsed document to the similarity computation module, which measures three kinds of similarity: edit distance similarity, cosine similarity and structural similarity.
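The pre-processing step can be sketched as below. The stop-word list and the suffix-stripping stemmer are simplified stand-ins; a real system would use a full stop-word list and, e.g., a Porter stemmer:

```python
import re

STOP_WORDS = {"the", "a", "an", "of", "and", "or", "in", "is"}  # illustrative subset


def stem(token: str) -> str:
    # Crude suffix stripping, standing in for a proper stemmer.
    for suffix in ("ing", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token


def preprocess(text: str) -> list[str]:
    tokens = re.findall(r"[a-z0-9]+", text.lower())      # tokenise
    tokens = [t for t in tokens if t not in STOP_WORDS]  # remove stop words
    return [stem(t) for t in tokens]                     # stem
```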
• The *edit distance-based similarity* (Mao et al., 2007) is calculated between the names of elements based on their Levenshtein distance. The similarity is defined as:
\[
\text{NameSim}(e_{1i}, e_{2j}) = 1 - \frac{\text{EditDist}(e_{1i}, e_{2j})}{\max(l(e_{1i}), l(e_{2j}))}
\]
(1)
where the \(\text{EditDist}(e_{1i}, e_{2j})\) is the Levenshtein distance between elements \(e_{1i}\) and \(e_{2j}\), and \(l(e_{1i})\) and \(l(e_{2j})\) are the string length of the name of \(e_{1i}\) and \(e_{2j}\) respectively.
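Equation (1) can be implemented directly; the Levenshtein distance is computed with the standard dynamic-programming recurrence:

```python
def edit_dist(a: str, b: str) -> int:
    """Levenshtein distance between strings a and b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]


def name_sim(n1: str, n2: str) -> float:
    """Equation (1): 1 - EditDist(n1, n2) / max(len(n1), len(n2))."""
    return 1 - edit_dist(n1, n2) / max(len(n1), len(n2))
```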
• The *structural similarity* (Mao et al., 2007) between two elements comes from their structural features (e.g., the number of direct properties of a class). The structural similarity of the classes in two ontologies is defined as follows:
\[
\text{StructSim}(e_{1i}, e_{2j}) = \frac{\sum_{k=1}^{n}(1 - \text{diff}_k(e_{1i}, e_{2j}))}{n}
\]
(2)
where \(e_{1i}\) and \(e_{2j}\) are two class elements in ontologies \(O_1\) and \(O_2\) respectively, \(n\) is the total number of structure features, and \(\text{diff}_k(e_{1i}, e_{2j})\) denotes the difference for feature \(k\), defined as:
\[
\text{diff}(e_{1i}, e_{2j}) = \frac{|\text{sf}(e_{1i}) - \text{sf}(e_{2j})|}{\max(\text{sf}(e_{1i}), \text{sf}(e_{2j}))}
\]
(3)
where \(\text{sf}(e_{1i})\) and \(\text{sf}(e_{2j})\) denote the value of structure features of \(e_{1i}\) and \(e_{2j}\) respectively.
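Equations (2) and (3) translate into code as follows; which structure features to use (e.g., number of direct properties, number of subclasses) is left to the implementation, so each element is represented here simply as a sequence of numeric feature values:

```python
def feature_diff(sf1: float, sf2: float) -> float:
    """Equation (3): relative difference of one structure feature."""
    m = max(sf1, sf2)
    return abs(sf1 - sf2) / m if m else 0.0


def struct_sim(features1, features2) -> float:
    """Equation (2): average of (1 - diff_k) over the n structure features.

    features1 and features2 are equal-length sequences of feature values
    for the two class elements being compared."""
    n = len(features1)
    return sum(1 - feature_diff(f1, f2)
               for f1, f2 in zip(features1, features2)) / n
```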
• *Cosine similarity* is a non-Euclidean distance measure between two vectors (Jiayi et al., 2008); it is a common approach for comparing documents in the field of text mining. Given two feature vectors \(c_i\) and \(c_j\), the similarity score between concepts \(i\) and \(j\) is computed using the dot product as follows:
\[
\text{CosSim}(i, j) = \frac{c_i \cdot c_j}{||c_i|| \cdot ||c_j||}
\]
(4)
The resulting score is in the range of \([0, 1]\) with 1 as the highest relatedness between concepts \(i\) and \(j\).
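Equation (4) in code, for plain numeric feature vectors:

```python
import math


def cos_sim(ci, cj) -> float:
    """Equation (4): dot(ci, cj) / (||ci|| * ||cj||)."""
    dot = sum(x * y for x, y in zip(ci, cj))
    norm = (math.sqrt(sum(x * x for x in ci))
            * math.sqrt(sum(y * y for y in cj)))
    return dot / norm if norm else 0.0
```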
Then, for each similarity computed in the similarity computation module, the harmony estimator in the estimator module estimates a harmony measurement; this provides a measurable number indicating which similarity is more reliable and trustworthy, so that it can be given a higher weight during aggregation. To establish joint attention, Agent1 makes an announcement containing a unique representation of a concept and an instance of that concept. After Agent2 receives the announcement, it investigates whether it has a concept of which an instance matches the communicated instance to a certain degree, by measuring the proportion of words that the two instances have in common. The instance with the highest proportion of corresponding words, together with the communicated instance, forms the joint attention, provided that the correspondence is high enough. The estimator module then sends the result to the mapping module, whose main goal is to establish a mapping between the primitive concepts that make up the concept. The process of generating a mapping from O1 to O2 is known as dynamic ontology mapping.
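The harmony-weighted aggregation can be sketched as below. The exact weighting scheme is an assumption, since the text only states that a more reliable similarity receives a higher weight; here the harmony scores are normalised to form the weights:

```python
def aggregate(similarities: dict[str, float],
              harmony: dict[str, float]) -> float:
    """Combine the three similarity scores for one element pair.

    similarities: measure name -> score, e.g. {"name": ..., "structure": ...,
                  "cosine": ...}.
    harmony:      measure name -> estimated reliability (harmony) of that
                  measure; higher harmony means a higher weight.
    """
    total = sum(harmony.values())
    return sum(similarities[k] * harmony[k] / total for k in similarities)
```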
5.3 The output phase
After applying the proposed dynamic ontology mapping, the mapping result from Agent1 to Agent2 takes the following form; the result of this process is called a mapping:
\[ A1.\text{node.Instructor.has.firstname} \leftrightarrow A2.\text{Node.lecturer.has.name}. \]
\[ A1.\text{node.Instructor.has.Phone number} \leftrightarrow A2.\text{Node.lecturer.has.cell phone}. \]
6 Experimental evaluation
A number of experiments were performed in two stages to validate the effectiveness of the proposed MPCS and the DOMAC.
6.1 Stage I: validation of the MPCS architecture
To investigate the effects of MPCS, a MAS has been implemented to provide a testing platform. The whole system runs on five PCs, each with an Intel Pentium 4 processor at 3.0 GHz and 2 GB of RAM, connected over 512 Mbps Ethernet. A network of cooperative agents is designed; the number of agents ranges from 100 to 1,000, depending on the specific test. The experiments focus on evaluating the scalability of the system with an increasing number of agents. Generally, scalability refers to how well the capacity of a system to do useful work increases as the size of the system increases. Thus, in distributed software engineering, the term 'scalability' is sometimes used when 'increased environmental loading due to an increase in the number of distributed components' is arguably more appropriate. The scalability of a MAS is therefore the average degree of performance degradation of individual agents in the society as their environmental loading, caused by an expansion in the size of the society, increases. Scalability has been achieved by using threading: any module of the MPCS architecture that may act as a bottleneck is executed as a separate thread. Two experiments were conducted to test scalability in local and global communication. For each experiment, several parameters have to be specified:
• Number of agents: easily one of the most important parameters in a MAS experiment.
• Number of hosts: limited only by the available resources.
• Agent platform: whether an agent is local (same platform) or global (different platform).
• Computational time, in milliseconds.
6.1.1 Experiment 1: test scalability of local communication in MPCS
First experiment: in local communication, when the sending and receiving agents are in the same platform, the MPCS is compared with two systems, JADE (Hajnal et al., 2007) and MOZART (Suarez-Romero et al., 2005), as shown in Figure 9, and the computational time for message delivery is measured while increasing the number of agents.
6.1.2 Experiment 2: test scalability of global communication in MPCS
Second experiment: in global communication, when the sending and receiving agents are in different platforms, MPCS is compared with the JADE and MOZART systems, as shown in Figure 10, and again the computational time for message delivery is measured while increasing the number of agents.
From the figures, it is observed that the computational time for delivering a message increases as the number of agents increases, as expected, but only linearly. It can also be observed that our MPCS behaves better on both measures than the JADE and Mozart systems, especially when managing many threads. JADE does not scale well for simulation sizes involving a large number of agents. The major reason for this is the inefficiency of the JADE agent directory service. Because this service is used frequently by the other platform services, its inefficiency affects those services too, and in global communication it may cause substantial delays. When a message is delivered, the JADE message transport service needs to know the receiver agent's status (whether it is active or dead) and address (whether it is on the same node or not) by accessing the directory service every time. Since the default directory service, which employs LDAP, has slow response behaviour, it is overwhelmed by a large number of concurrent requests (Wang et al.,
This experiment shows that our MPCS architecture has a scalability advantage, as it behaves efficiently under full-load conditions compared to recent systems.
6.2 Stage 2: validation of the DOMAC
A number of experiments were performed to validate the effectiveness of the proposed DOMAC. In each experiment, we used precision and recall to evaluate the results, defined as follows:
\[
\text{Precision} : \quad P = \frac{|B \cap A|}{|A|} \\
\text{Recall} : \quad R = \frac{|B \cap A|}{|B|}
\]
In the formulas above, \( A \) denotes the set of mappings recognised by the algorithm and \( B \) the set of reference mappings. There is always a trade-off between precision and recall; therefore, the \( F \)-measure is used to combine both metrics. It is the harmonic mean of precision and recall, i.e., the reciprocal of the arithmetic mean of the reciprocals of precision and recall. It is computed as:
\[
F\text{-measure} = \frac{2 \times (\text{Precision} \times \text{Recall})}{\text{Precision} + \text{Recall}}
\]
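The three metrics can be computed directly from the sets of found and reference mappings:

```python
def evaluate(found: set, reference: set) -> tuple[float, float, float]:
    """Precision, recall and F-measure of a mapping result.

    found:     set of mappings produced by the algorithm (A above)
    reference: set of ground-truth mappings (B above)
    """
    correct = len(found & reference)
    precision = correct / len(found)
    recall = correct / len(reference)
    # Harmonic mean; guard against division by zero when nothing is correct.
    f = 2 * precision * recall / (precision + recall) if correct else 0.0
    return precision, recall, f
```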
Figure 11 The precision and recall of DOMAC, JA and KMS (see online version for colours)
6.2.1 Experiment 1: test performance of DOMAC modules
In experiment 1, we evaluated the performance of the joint attention module. First, Ontology1 and Ontology2 were randomly generated, each containing 1,000 instances. Given these ontologies, the agents established a mapping between them. Finally, the experiments were carried out for different sizes of the word set, and precision and recall were determined for the joint attention. To evaluate the joint attention module in our DOMAC, we compare our results with those obtained by JA (Floris and Nico, 2004) and KMS (Ruixue and Zhanhong, 2008), as shown in Figure 11. The experimental results show that our DOMAC performs better than the systems developed in JA and KMS, for two reasons. First, using XML documents helps to better address pragmatic aspects through the use of links: links point to additional information, can assist with ontological problems (defining and sharing ontologies), and can point to agent capability and identity information, protocols, and even semantics. Second, the similarity computation module (the input to the joint attention module) combines three kinds of similarity (edit distance similarity, cosine similarity and structural similarity), which are listed (Ming et al., 2008) as more effective and reliable than most other similarity methods.
6.2.2 Experiment 2: evaluate ontology mapping approach in DOMAC
In experiment 2, to evaluate our ontology mapping approach in DOMAC, we use the benchmark tests from OAEI 2008 (http://oaei.ontologymatching.org/2008/), the ontology matching campaign of 2008. We chose it for several reasons:
a The annual OAEI campaign has become an authoritative contest in the area of ontology mapping, and thus attracts many participants including both well-known ontology mapping systems and new entrants.
b The campaign provides uniform test cases for all participants so that the analysis and comparison between different approaches is practical.
c The ground truth of benchmark tests is open.
Thus we can use it to comprehensively evaluate the different components of our approach. In this experiment we are concerned with the ontology mapping approach, so we compare ours with the most common recent systems that participated in the OAEI campaign. Figure 12 shows the comparison of precision, recall and F-measure between the DOMAC and the top-ranked systems on the benchmark tests in the OAEI campaign.
Figure 12 The comparison between DOMAC and top ranked systems (see online version for colours)
7 Conclusions and future work
In order to make interaction between agents in a DMAIS possible, it is necessary to have a communication platform, a communication language and an ontology mapping system. In this sense, an outline of the communication between agents has been described by means of the proposed communication layers. An MPCS architecture is proposed to provide a highly flexible and scalable system that allows agents written in different languages to send and receive messages using the KQML standard. A DOMAC is also proposed, based on different mapping approaches. A survey of recent work on communication in MAS is given, together with an outline of research that uses ontologies; in addition, comprehensive surveys of some well-known ontology mapping systems are introduced. A preliminary experiment indicates that DOMAC can be evidently helpful in discovering semantic mappings for dynamic agent-based ontologies, and further experiments indicate that MPCS has a scalability advantage, as it behaves efficiently under full-load conditions compared to recent systems. As future work, we plan to propose new interaction protocols in our architecture; in addition, we intend to present an agent negotiation model for ontology mapping.
References
OAEI (2008) Ontology Alignment Evaluation Initiative, ontology matching campaign 2008, available at http://oaei.ontologymatching.org/2008/.
Distributed multi-agent communication system
Using Kinect to interact with presentation software
Marcel Aarts
Supervisors:
Oguzhan Özcan (MDH)
Christian Sjöström (Imagination Studios)
Examiner:
Rikard Lindell (MDH)
Mälardalens högskola, Västerås
School of Innovation, Design and Engineering
Imagination Studios, Uppsala
May 2012
Abstract
Imagination Studios is a company specialized in motion capturing and animation. Part of their daily business is working at trade shows where they have a booth to keep close contact with existing customers and also to find new ones. However, usually only two to three people will be working at the booth, and frequently, these people will be in meetings with potential customers. During a time like this, nobody is free to attend to other people checking out the booth. This can result in a potential loss of a new customer. This project seeks a way to alleviate that problem.
The idea behind this project was to create an application that trade show visitors can interact with in a playful and innovative way while also giving them a feel of what Imagination Studios is all about while looking for information about the company. To do this it was decided to let users interact with the system by using a Microsoft Kinect. The Kinect allows for easy implementation of a user interface based on motion capturing while also being very cost effective. A new user interface was to be designed as well, without copying already existing solutions and without simply expanding a traditional UI with new elements. To achieve this several design sketches were made, and the most interesting ones were then turned into storyboards. These were then used to decide on the final design, which was then elaborated on by use of video sketches and a collage in Adobe Photoshop.
Several tools were used during the actual implementation. For the actual visualization and graphical design, the Unreal Engine 3 in combination with UDK was decided upon. To connect Kinect and Unreal Engine 3, a third party addon called NIUI which makes use of the open source SDK OpenNI was used. For ease of debugging and programming in Unrealscript, the programming language used by the Unreal Engine 3, an addon for Microsoft Visual Studio 2010 called nFringe (Pixel Mine, Inc., 2010) was used.
Table of Contents
Abstract
Table of Figures
1. Introduction
2. Project Requirements
3. State of the Art Overview
4. Design Phase
4.1 Concept
4.2 Design process
4.3 Gestural Control
4.4 Description of Designed Features
4.5 Idle system
4.6 User detection
4.7 Data Access
4.8 Contact Information
4.9 Other Developed Ideas
4.9.1 A city of data
4.9.2 The Data Tree
5. Hardware / Used Tools
5.1 Kinect Hardware
5.2 Kinect SDK
5.2.1 Licensing
5.2.2 Features
5.2.3 Decision
5.3 Game Engine Comparison
5.3.1 Cryengine 3
5.3.2 Unreal Engine 3
5.3.3 Unity3D
5.3.4 Decision
5.4 Tools
5.4.1 nFringe
5.4.2 Visual Studio
5.4.3 Adobe Photoshop
5.5 Special techniques used during development
5.5.1 Gesture Recognition
5.5.2 Detecting if hand indicators are hovering over a bubble
5.5.3 Using opacity maps for hover effects
5.5.4 Bubbles and Animations
6. Test and Evaluation
7. Conclusion and Outlook
8. References
Table of Figures
- Figure 1: Screenshot 1 from a video sketch
- Figure 2: Screenshot 2 from a video sketch
- Figure 3: Idle screen showing different menu options
- Figure 4: Idle screen in the actual software
- Figure 5: Cells floating inside a life form
- Figure 6: Information within a cell is being displayed
- Figure 7: Cell containing QR-code
- Figure 8: Infrared grid projected by the Kinect
- Figure 9: Kinect Camera
- Figure 10: Comparison of GPU vs. CPU PhysX
- Figure 11: The problem with perspective
- Figure 12: General hover detection algorithm
- Figure 13: Bubble without hover effect
- Figure 14: Bubble with hover effect
- Figure 15: Texture holding all text
- Figure 16: Opacity Map
- Figure 17: Shader program used for hover effects
1. Introduction
Imagination Studios is a company specialized in motion capturing and character animation for the computer game industry. As this is a constantly changing business, it is very important to always be on top of the newest developments and to look for new customers as well as retain existing ones. One way of doing this is to have a booth at trade shows, where potential customers can get a first impression of the company, book meetings with the staff and leave their contact information.
However, all staff will frequently be busy tending to clients, which means that potential customers may be ignored and even lost because nobody is around to help them and take their contact information.
To alleviate this problem, the idea was born to create an application that can interact with customers while staff are busy. This way, people can still find basic information about the company and even leave their contact information without staff being around to help them. To make the application draw interest, the decision was made that interaction should happen in a new and innovative way. As Imagination Studios is a company specialized in motion capturing, using this technique to interact with the system seemed the logical choice. The whole interaction should also be playful and encourage the user to explore.
A cheap and simple way to do motion capturing is using the Microsoft Kinect. It offers simple and reliable skeleton tracking as well as a choice of both proprietary and open source SDKs.
The goal of this project was to create a completely new kind of user interface, designed from the ground up with gestural control in mind. It is very easy to extend a standard user interface designed for keyboard and mouse with support for a device like the Kinect. However, doing so often feels clunky and completely ignores the strengths a natural user interface using gestures and motion can offer. Designing a completely new interface is no trivial task and is hampered by the simple fact that we have become so used to the point-and-click interfaces of operating systems like Microsoft Windows and Apple's OS X. An extensive design process was needed, with many sketches discarded early on. This was a very important part of the project, as it helped with the process of "thinking out of the box".
The focus of the project lay on both the design phase and the implementation phase. The design phase was extensive to make sure that the final design was actually innovative, but also to make sure that it was usable in an easy and logical way. During the implementation phase, the goal was to create a so-called "working prototype", meaning it should be possible to demonstrate all the details of the human-machine interaction without the final content being included in the program. A short testing phase was also conducted at the end of the project; however, the focus of the project was not on extensive user testing.
Special thanks go to Daniel Kade, who has been a tremendous help during the course of this project.
2. Project Requirements
During the design process, several different ideas were developed. Several ideas were rejected very early on, often during the initial sketching, as they did not meet the requirements of the project. The requirements of the project were as follows:
- Use of the Microsoft Kinect
- Interface based on natural user interaction, using gestures
- Visually appealing design
- Easy to learn, as it will be used at trade shows, where nobody has the time or patience to learn how to interact with a system.
- Should be able to present any type of information to the user. This may include, but is not limited to, images, sound, videos and even other software.
3. State of the Art Overview
When researching the Kinect in digital libraries like IEEE Xplore and the ACM Digital Library, it quickly becomes clear that an astonishing amount of research is being done with this piece of hardware, which was originally created only as an innovative game controller. Narrowing the search down to gesture recognition and its possible uses, it is equally obvious that while there are many papers on how to technically detect gestures, there are surprisingly few on the actual implementation and usage of those gestures. Examples of these technical papers are Biswas and Basu (Biswas & Basu, 2011) and Lai, Konrad and Ishwar (Lai, et al., 2012).
Vera et al. (Vera, et al., 2011) describe a system that combines several sensors into an augmented reality experience. They use two Kinect devices to capture the user's location, a gyroscope in the form of an Android smartphone to track head movements, a WiiMote for facial expressions, and a microphone capturing the user's voice, which is then used to calculate lip movement with an amplitude-based algorithm. Using the data provided by these sensors, the user can interact with the virtual character on screen; for example, they can walk around him or talk to him. The system requires a user to put on certain equipment specifically to interact with it and is therefore not a pure natural interaction system. It has been used successfully at a marketing presentation for touristic content (FITUR '11). This is especially interesting, as the project described in this document is also planned to be used at marketing presentations and trade shows.
One of the most important websites that shows off actual projects done with Kinect is called Kinect Hacks (Kinect Hacks, 2012). It offers videos of countless different projects and even has a Top 10 of the current best projects.
One of these projects is currently in use at the World of Coca-Cola museum in Atlanta, GA (Kinect Hacks - Protecting The Secret, 2012). It tracks several users at once, who can then playfully use gestures to interact with all kinds of objects on screen to find new information about the company and its products. In one of the games, the users need to find the combination to a vault containing Coca-Cola's secret. This can be observed in the video on the website from about timestamp 1:18. At about 1:24, users can be seen making large gestures, trying to turn the knobs of the combination lock. They instantly understand what to do without any sort of explanation from the system. Understanding the system without the need for explanation is a very important concept and something that needed to be incorporated into the current project as well.
Another very impressive project that combines augmented reality and gestures into what is called a “god game” is called “Kinect-powered Augmented Reality Sandbox”. With this system, a user can play in a sandbox while the system projects contours onto the sand in real time. With a certain gesture, virtual water can then be poured into the sandbox which will react to the structure of the sand in the sandbox (Kinect Hacks - Augmented Reality Sandbox, 2012). Interesting about this project is the seamless connection between the real sandbox, the virtual contours and water, and the gestures to combine it all.
The project KinectJS (Kalogiros, 2012) attempts to combine JavaScript and Kinect to interact with a website, with the ultimate goal of bringing motion controls to HTML5. The website shows a couple of videos of a user interacting with a website and playing games on it. The gestures are simple and often require large movements. Unfortunately, the general user interface is not tailored to natural interaction but is in fact strongly inspired by Microsoft's Metro UI, which will be introduced in Windows 8. Instead of using natural gestures, the user still needs to click on items, which in this specific implementation seems to be done by hovering over a button for a certain amount of time. This interrupts the work flow and can be tiring for the user, who has to hold his arm still in an unusual position. For the current project, KinectJS essentially shows how not to do it, as it attempts to control a standard user interface with gestures instead of developing a user interface tailored to the needs of gestural controls.
A project of a different kind is "Cauldren – Gestural Performance Piece" (Kinect Hacks - Cauldren, 2012), where all gestures and movement captured by the Kinect are used to create an audio-visual experience. All movement data is converted into MIDI, which is then played as sound; fast movement, for example, produces different sounds than slow movement. This is accompanied by stylized fireworks that follow the user's hand movements and change in intensity depending on, among other things, speed. Again, the video accompanying the description explains the usage very well. The system incorporates natural interaction elegantly, as any movement by the user is interpreted and converted into a certain output. While this type of interaction does not seem suitable for interacting with specific objects on a screen, it uses the capabilities of both the Kinect and natural interaction to their full advantage.
4. Design Phase
4.1 Concept
Currently, whenever Imagination Studios have a booth at a trade show, they have a very basic setup with some chairs and a TV screen looping their show reel. When all personnel at the booth are in a meeting with a client, potentially interested customers have no way of leaving their contact information and will simply walk past the booth.
Initial design ideas for the project were for a general content management system, but when confronted with the aforementioned problem, the decision was made to adjust the first design to be used as presentation software instead. As Imagination Studios is a motion capturing and animation company, using the Kinect to navigate the software seemed a logical choice, as it incorporates the company's main business area into the presentation software, immediately giving anybody using the system a feel for what the company is about. It also makes using the system feel more like a game, adding fun to it.
The basic design concept is inspired by life itself. Pieces of information are considered cells, and they can be grouped into topics, called life forms, which means that every life form consists of one or more cells. A vital thought behind the design was to avoid simply adjusting a standard user interface to work with the Kinect, and instead to design a user interface specifically geared towards motion capturing.
4.2 Design process
Initially, several designs were thought out and sketched on paper using only a pencil. This first phase was important, as sometimes ideas that seemed good at first turned out to be poorly thought out or simply impossible to do when brought to paper. If an idea showed potential, a more detailed design was created, drawing a complete storyboard showing the full extent of the interaction. These storyboards were then used to discuss the design ideas with supervisors and test persons. Two of the later rejected storyboards are discussed in more detail later in this chapter.
The storyboards were shown to several test persons asking for their opinion on which design they considered to be best. Most of them preferred the design using life forms to hold information. During discussions with the project supervisors, the same design was also preferred.
After the decision was made to use the design based on life forms, several video sketches were created. While storyboards are very simple pencil drawings, video sketches are made up of short video clips and photos, both edited and unedited, to show the interaction in a more realistic environment. A sequence of two screenshots from a video sketch is shown below as an illustration.
 
The last step during the design phase was to create the actual visual design of the system. Storyboard and sketches would only give a very rough impression, and the video sketch was mainly used to explain the final interaction in great detail.
The visual design was done in Photoshop using several images that were then edited and put together to form the final design. Several of these images will be shown throughout this chapter when explaining different parts of the design in more detail.
4.3 Gestural Control
The system is designed to be controlled by capturing the body motions of the user by using a Kinect camera. The gestures are strongly inspired by gestures used for touch screens, mainly “Tap”, “Pinch” and “Spread”. This can be done according to Dan Saffer on page 45 (Saffer, 2009) where he writes:
“Patterns for touchscreens can also, in many cases, be applied to free-form interactive gestures, just slightly modified. For instance, the Spin to Scroll pattern can be performed with a finger on a device, but it could also be performed in the air with a finger”
During the design, it was also very important to keep the gestures as simple as possible, as the more complicated it gets, the fewer people will be able to perform these gestures. This is especially important for systems like for example a public kiosk that people rarely use as mentioned by Dan Saffer on page 38 (Saffer, 2009).
To access the cells, or information, within each life form, the user touches the life form and pulls it apart, increasing its size to fill the whole screen and thereby making the cells within visible. This is a variant of Dan Saffer's "Spread to Enlarge" gesture found on page 64 (Saffer, 2009), where both arms are used instead of two fingers. Because the gestural system in place is three-dimensional, unlike a two-dimensional touch screen, an extra gesture is required to indicate that the user wishes to interact with an object; without it, objects on screen would be activated at random. The gesture used here is a variant of Saffer's "Tap to Select" gesture found on page 48 (Saffer, 2009). To access the specific topic a certain cell covers, the user uses the same gesture as before. This creates consistency within the system and makes it easier to use. If the user lets go of the life form before opening it fully, for example because he suddenly decides he wants to look at something else, the life form will shrink back to its initial size on its own.
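The "spread" interaction described above can be thought of as a mapping from the distance between the two tracked hands to the scale of the grabbed life form. The following is an illustrative sketch only, not the project's actual implementation: the coordinate units, threshold and scale range are invented assumptions.

```python
# Hypothetical sketch of the "spread to enlarge" mapping. Hand positions
# (e.g. from skeleton tracking) are assumed to be in normalized screen
# units; all constants are illustrative.
import math

OPEN_THRESHOLD = 0.8           # hand distance at which the life form is fully open
MIN_SCALE, MAX_SCALE = 1.0, 4.0

def hand_distance(left, right):
    """Euclidean distance between the two tracked hand positions."""
    return math.sqrt(sum((l - r) ** 2 for l, r in zip(left, right)))

def life_form_scale(left, right, grabbed):
    """Map the spread of the user's hands to a scale factor.

    If the user lets go (grabbed=False) before the threshold is reached,
    the life form returns to its initial size on its own.
    """
    if not grabbed:
        return MIN_SCALE
    d = min(hand_distance(left, right), OPEN_THRESHOLD)
    t = d / OPEN_THRESHOLD
    return MIN_SCALE + t * (MAX_SCALE - MIN_SCALE)
```

A mapping like this naturally reproduces the behaviour described above: releasing early snaps the scale back to its minimum, while spreading past the threshold leaves the life form fully opened.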
To make sure the user doesn’t get confused when controlling the system, two markers on screen will indicate the position of each of the user’s hands. The controls are kept very simple, as the design is explorative, meaning the user should be able to find out on his own how to use the system and enjoy himself while doing so.
To stop working with the information currently being accessed, the cell can be pushed together slightly, and it will automatically shrink again, in the same way it does when a user lets go of it before it is fully opened. This is a variant of the "Pinch to Shrink" gesture described by Saffer on page 64 (Saffer, 2009). Alternatively, the user can simply pull another cell visible on screen towards them, closing the current one automatically and opening up the new one. When one cell is open, the other cells in the same life form will float next to it, allowing the user quick access to them. However, when a life form has been opened, the other life forms will not show on screen, to avoid information overload and confusing the user.
As these gestures are quite standard and already used in other software, an alternative gesture system was developed as well. In the latest design, the life forms and cells are represented as soap bubbles. To access the information within a soap bubble, the user would simply look at it and take a step towards it. When no longer interested in the information, they would simply look away and take a step back, letting the bubble shrink back to its original size. Stepping forwards and backwards is a gesture described by Dan Saffer as well, found on page 74 and called "Move Body to Activate" (Saffer, 2009). Using the user's viewing direction as direct input is not a documented gesture yet; however, it could be seen as a heavily modified version of Saffer's "Point to Select/Activate" gesture found on page 76 (Saffer, 2009).
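The stepping part of this alternative control could be reduced to comparing the user's torso depth against a baseline captured when he entered the control zone. The sketch below is hypothetical: the joint used, the metre units and both thresholds are assumptions for illustration, not the project's implementation.

```python
# Illustrative sketch of "Move Body to Activate": a step towards the screen
# opens the focused bubble, a step back lets it shrink again. Depth values
# are assumed to be in metres from the sensor; thresholds are invented.
STEP_IN = 0.35   # moving this much closer than the baseline counts as a step forward
STEP_OUT = 0.10  # returning to within this margin of the baseline counts as a step back

def bubble_state(baseline_z, torso_z, currently_open):
    """Return whether the bubble the user is looking at should be open."""
    moved_in = (baseline_z - torso_z) >= STEP_IN
    moved_back = (baseline_z - torso_z) <= STEP_OUT
    if not currently_open and moved_in:
        return True      # user stepped towards the bubble: enlarge it
    if currently_open and moved_back:
        return False     # user stepped back: let it shrink
    return currently_open
```

Using a baseline plus hysteresis (two different thresholds) avoids the bubble flickering open and closed when the user stands near the boundary.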
4.4 Description of Designed Features
Below is a description of all features designed during the design phase. During the implementation phase, however, some features were changed or discarded. This could happen for several reasons: a better idea might have come up while working on the project, or an idea proved unsuitable when testing the prototype.
4.5 Idle system
Whenever no user is present, the system is in an idle state. While this is the case, it should try to capture the attention of anybody who happens to walk past the booth and get them to stop and watch. The life forms slowly float around the screen, changing both their positions and the way they look; they may change both shape and color. The picture below shows the initial design for the idle state. The life forms all have different shapes that keep changing. They also have captions, clearly showing what information can be found inside them. The Imagination Studios logo is present on every screen, to make sure the name of the company cannot be missed.

During implementation it was noticed that the design above was not very appealing and should be made much more interesting. Several changes were made, all of which can be seen in the screenshot below.
Figure 4: Idle screen in the actual software
Instead of always showing the full text of the content, only the first letter is displayed. The full text is only shown when a user moves a hand indicator over one of the bubbles. A black background does not seem inviting, so an animated night sky with moving stars and clouds was implemented instead. This was a suggestion made by an Imagination Studios employee who was testing the system.
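The reveal-on-hover behaviour amounts to a point-in-circle test between a hand indicator and each bubble. A minimal sketch, with an invented bubble data layout:

```python
# Minimal sketch of the hover behaviour: a bubble's full caption is only
# revealed while a hand indicator is inside it. The dict layout and the
# screen-coordinate geometry are assumptions for illustration.
def is_hovered(hand, bubble_center, bubble_radius):
    """Point-in-circle test in 2D screen coordinates."""
    dx = hand[0] - bubble_center[0]
    dy = hand[1] - bubble_center[1]
    return dx * dx + dy * dy <= bubble_radius * bubble_radius

def caption_for(bubble, hand):
    """Show only the first letter unless a hand hovers over the bubble."""
    if is_hovered(hand, bubble["center"], bubble["radius"]):
        return bubble["caption"]
    return bubble["caption"][0]
```

In the running system this test would be evaluated every frame for each bubble against both hand indicators.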
4.6 User detection
When a user steps into the Kinect control zone, the system greets him. This acknowledgement lets the user know that the system sees him and is ready to receive input. The life forms stop moving at this point to facilitate interaction with the system and to prevent the user from having to chase after the information he is interested in.
4.7 Data Access
When the user starts to access the information he is interested in, the system will keep track of what he has accessed. This data will later be used if the user requests more detailed information or wishes to be contacted.
The life form the user accessed will grow to fill up most of the screen, showing the cells it contains. It will not change its shape from round to rectangular, however. This keeps the design consistent, making sure that the user always knows where he is and does not feel lost. The cells inside float freely, also changing their form and color randomly. Each cell has a caption showing what specific topic it contains information about.
Figure 5: Cells floating inside a life form
Accessing a specific cell makes it grow larger. The other cells remain visible next to it, however, to make accessing them easy. A new cell labeled "Detailed Information" appears as well, enabling the user to request more detailed information or leave his contact details. Cells only show basic information; a picture and a headline are all that is needed, as nobody is interested in reading much on a screen at a trade show.
During implementation, several changes were made here. The user's moves are no longer tracked, as that information is not actually needed. Also, whenever a cell is being accessed, the other cells are faded out instead, allowing the user to focus on the item he is currently interested in. The other cells fade back in when the user indicates he is done with the current cell.
4.8 Contact Information
If a user wants to know more about the topics he has visited so far, he can access the detailed information cell, which shows him a QR-code containing a link to more information on everything he has accessed. It is also possible to scan a business card, which allows IMS to contact the person later. When a business card is scanned, the history of the current user is saved with it, making it easier for the sales department to contact that customer with specific information.
Another way for the user to leave his contact details is to give him an email address, also via a QR-code, to which he could then simply send his vCard. This would not save the history of the current user, however.
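As a sketch of how the personalized link in this design could have encoded the visit history, the topics a user accessed might simply be appended as a query string before the link is rendered as a QR-code. The domain and parameter name below are invented for illustration:

```python
# Hypothetical sketch: encode the current user's visit history into the
# personalized link shown as a QR-code. The URL and query parameter are
# invented; only the idea of carrying the history in the link is from the
# design described above.
from urllib.parse import urlencode

def personalized_link(visited_topics):
    """Build a link listing everything the current user has accessed."""
    query = urlencode({"topics": ",".join(visited_topics)})
    return "https://example.com/more-info?" + query
```

The same history string could be attached to a scanned business card, so the sales department knows which topics the customer looked at.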
During implementation, the ideas for the business card scanner and the personalized link in the "Detailed Info" cell were scrapped. Instead, a cell called "Contact" was introduced on the main page, allowing the user to scan the QR-code inside and send his vCard to that email address. This was done to keep everything as simple as possible.
4.9 Other Developed Ideas
4.9.1 A city of data
The first idea was to organize all data as a city, where buildings would represent organizational units and inhabitants specific pieces of data. The user would be able to walk through the city and interact with any inhabitant he encounters, thereby accessing whatever data that inhabitant represents. This idea was rejected because, while it would be a nice interface for browsing random data, it would be very hard to implement a quick and easy direct-access system on top of it.
4.9.2 The Data Tree
Another idea that was developed during the design process was based on a tree carrying fruit. The tree would have several branches, each branch representing an organizational unit. Each of these branches would carry fruits, which would represent the specific data. A user would be able to both add data to and remove data from the tree. If data was added that did not fit into any of the currently available organizational units, the tree would then grow another branch, allowing the data to be put on it. This type of interface would work very well with a gesture based system and was generally liked when shown to testers. However, the design is more suited towards a general content managing system whereas the project itself was set up as presentation software, which does not require users to be able to add data to the tree.
5. Hardware / Used Tools
5.1 Kinect Hardware
The Kinect camera was released on November 4, 2010 in North America and November 10, 2010 in Europe. It is a device developed by Microsoft for their Xbox 360 console to allow user input without the need for a controller. It was a huge success, selling more than 8 million units in the first 60 days and earning the Guinness World Record as the "fastest selling consumer electronics device". It can both track gestures and recognize spoken commands through the use of a microphone.
The range camera technology was developed by the Israeli company PrimeSense. It uses an infrared projector and camera to track movement in three dimensions. The infrared laser projects a grid onto the environment, which is then used to calculate distances. The image below shows an infrared image of the laser grid. This technology was invented in 2005 by Zeev Zalevsky, Alexander Shpunt, Aviad Maizels and Javier Garcia.

The hardware itself captures video at a rate of 30 frames per second at a resolution of 640 x 480 pixels. The RGB video stream uses 8-bit color, whereas the monochrome depth sensor uses 11-bit depth, allowing for a total of 2048 levels of sensitivity. The picture below shows where each part of the Kinect is located.
Figure 9: Kinect Camera
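The 11-bit depth format directly explains the 2048 sensitivity levels, since 2^11 = 2048. The sketch below also shows one empirical raw-to-distance formula popularised by the OpenKinect community; it is a community approximation rather than an official calibration, and the sentinel value for "no reading" is an assumption:

```python
# Each depth pixel is an 11-bit value, giving 2**11 = 2048 raw levels.
# The conversion below is an empirical approximation circulated in the
# OpenKinect community, not an official Microsoft/PrimeSense calibration.
import math

RAW_LEVELS = 2 ** 11  # = 2048, matching the 11-bit depth sensor

def raw_to_metres(raw):
    """Approximate distance in metres for a raw 11-bit Kinect depth value."""
    if raw >= 2047:                 # the top value commonly marks 'no reading'
        return float("inf")
    return 0.1236 * math.tan(raw / 2842.5 + 1.1863)
```

For a mid-range raw value of around 600 this yields a distance on the order of 0.7 m, which is roughly where a user would stand in front of a booth installation.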
5.2 Kinect SDK
The Kinect SDKs make it possible to access the various functions of the Kinect hardware from a programming project. They often also include pre-made functions, giving users functionality that would otherwise require extensive development time. There are several SDKs available for the Kinect, and during the course of the project a decision had to be made on which one to use. OpenNI, openKinect and the official Microsoft Kinect SDK were considered during this process.
5.2.1 Licensing
As the result of this project could potentially be used as the basis for a commercial program, the license of each SDK needed to be considered.
OpenNI comes with the LGPL, or GNU Lesser General Public License. This means that the source code of the SDK is freely available, but unlike with the standard GPL, or GNU General Public License, applications developed with the SDK are not required to be open source as well.
The Microsoft Kinect SDK was still in beta stage when the research was conducted. The beta license prohibited any kind of commercial usage and reserved the right for Microsoft to use any application developed with the SDK for advertising etc. free of charge. The cost for a commercial license after release of the SDK was still unknown during the decision phase.
openKinect is released under the Apache 2.0 or GPL license; under the GPL option, any application developed with it must also be open source.
5.2.2 Features
OpenNI and the Microsoft SDK have very similar features; however, OpenNI requires a short calibration phase before a user can use the system, whereas the Microsoft SDK is capable of tracking a user without any calibration.
OpenKinect does not have a skeleton tracking feature included.
5.2.3 Decision
OpenNI was finally chosen for this project. The Microsoft SDK was rejected due to uncertainty about future licensing and costs. openKinect was rejected for several reasons: licensing, its lack of a skeleton tracking feature, and its focus on the Linux platform, which leads to instability and performance issues on Windows, the development platform for this project.
5.3 Game Engine Comparison
There are a great many options for actually drawing the content on screen. DirectX or OpenGL could be used directly, requiring a graphics engine to be written from scratch. This in itself would be very time consuming and would more than fill a bachelor thesis on its own.
Another option is to use one of the many game engines available. Possibilities include the Unreal Development Kit (UDK), Cryengine 3, Unity3D and Ogre3D. The first three engines are examined in more detail below. Ogre3D does not have content editors like the other three, so it was dropped from the decision process early.
5.3.1 Cryengine 3
The Cryengine 3 SDK builds upon the Cryengine 3 by Crytek and has in previous versions been used for games such as Far Cry, Crysis and Crysis 2. It is mainly geared towards being used in first-person shooter video games. Crytek offer a large community with special forums geared towards game development with their engine. This offers a large knowledge base and contact with many experienced users to help with any problems that may arise during development.
5.3.1.1 Main use
This engine is specifically geared towards landscapes; even very large outdoor maps can be created in the editor. Whenever a new map is created, the editor assumes that the level requires terrain, water and sky, none of which can actually be removed, only edited. This limits the usability of the engine for other types of games; for example, games set in space or indoors are less suited to it.
Programming in this engine is done in C++; however, Lua and XML are also available. A physics engine is included as well, allowing users to create physical effects within their maps. It is proprietary, however, meaning it does not utilize NVIDIA's PhysX and therefore does not support hardware-accelerated physics.
5.3.1.2 Always online requirement
To use the SDK, an account with Crymod is needed. The user must be logged in at all times, which means that an Internet outage at the user's end or a service interruption on Crymod's servers prevents the user from working with the SDK. Even though permanent Internet access is common nowadays, interruptions can still occur, resulting in lost work time.
5.3.1.3 Licensing
The SDK is free to download and use, giving everybody a chance to evaluate it properly without the risk of financial loss if the engine turns out to be the wrong choice for the project in question. For educational use and non-commercial projects, no additional license is required and it is completely free to use.
Independent developers that wish to sell their products need to apply for a special Independent Developers License. This is a royalty-only license, allowing these developers to work on their projects without needing initial funding to pay for the license. When the game is on sale, Crytek then requires 20% of the developer’s revenues from it.
For normal commercial applications, Crytek needs to be contacted for standard licensing terms. These terms are not publicly available.
5.3.1.4 Kinect
There is no support for Kinect at the time of writing, and there are no third-party plugins available either, meaning a connection between Kinect and Cryengine 3 would have to be developed from scratch.
5.3.2 Unreal Engine 3
Unreal Engine 3 is developed by Epic Games, and the full game engine package is offered to users in the form of the so-called Unreal Development Kit (UDK), with updates released once a month. It is mainly developed for first-person shooters but has been used for other game types as well, including MMORPGs. UDK supports many platforms, including DirectX 9, DirectX 11 and Android. UDK was first released in November 2009 and has since built up a massive community. Many websites offer almost any kind of tutorial imaginable, and professionally made video tutorial series on a wide range of topics are available on YouTube as well. An extensive wiki explaining all functions is offered too. UDK presumably has the most extensive collection of freely available knowledge at the time of writing.
Even though UDK is updated once a month, there are often incompatibilities between versions when Epic changes features or fixes bugs. Because of this, it is highly discouraged to switch to a newer version of UDK within the same project. Epic Games is aware of this, and all UDK versions since the release in November 2009 are still available for download.
UDK is designed to host only one project per installation, as all user-generated content goes into a single folder. However, it is perfectly possible to have several UDK installations on one machine to work on several projects simultaneously. The engine comes with a good number of samples, allowing new users to examine them to learn basic techniques. Unlike Cryengine 3, it is possible to have empty space in a map, for example to create a location for spaceships to battle.
With the standard license, the user does not have access to the engine's actual C++ source code. UDK does, however, offer a fairly powerful script language called UnrealScript. The language comes with a large collection of pre-made classes that users can inherit from to create their own.
5.3.2.1 Licensing
UDK is free to use for educational use. When used commercially, a 99 US$ fee needs to be paid, with extra royalties required after the first 50000 US$ in revenue. If used internally in a company, a flat fee of 2500 US$ per year is to be paid.
5.3.2.2 Kinect
Unreal Engine 3 does not have built-in support for Kinect. There is, however, a third-party add-on called NIUI which gives access to the coordinates of all joints of the currently tracked user. It is still beta software and therefore only offers basic functionality.
5.3.3 Unity3D
Unity3D is developed by Unity Technologies. As with the other two game engines, the SDK is free to download. It is not aimed at a specific type of game and can be used for anything from browser-based MMOGs to role-playing games. It has a very strong community, but does not have quite the same development power behind it as the other two game engines. Third-party plugins are often required to achieve certain results, and these are frequently still in beta status and need to be purchased. Programming can be done in JavaScript, C# or Boo. The whole SDK is based on Mono, an open-source implementation of Microsoft's .NET framework. Physics are implemented using the PhysX libraries developed by nVidia, allowing the required calculations to be done in hardware on GPUs made by the same company, thereby significantly improving performance over calculations done on a CPU. As an example, the website benchmarkreviews.com ran several tests with the game Mafia II comparing performance with GPU- and CPU-calculated PhysX; the results make it very clear that GPU PhysX is vastly superior (Coles, 2010).

*Figure 10: Comparison of GPU vs. CPU PhysX*
5.3.3.1 Licensing
Unity offers several licenses. The standard license, simply called Unity, even allows developed games to be sold without any royalties. However, this license has limitations, mainly in functionality, and it may not be used by companies whose turnover exceeded US$100,000 in their last fiscal year. There are special licenses for Android and iOS as well. The commercial license, called Unity Pro, offers all functionality apart from the tools for mobile operating systems, which must be purchased separately. Unity Pro comes at a flat fee of US$1,500 without any royalties.
5.3.3.2 Kinect
Unity3D already has a good integration with OpenNI allowing the user to start developing Kinect applications with Unity3D quickly and easily.
5.3.4 Decision
Taking everything into account, Unity3D would have been the preferred choice for developing this prototype. In particular, its more mature Kinect integration would have been extremely beneficial to the project. However, as Imagination Studios has good contacts with the developers of UDK, the decision was made to use the Unreal Development Kit instead of Unity3D.
5.4 Tools
5.4.1 nFringe
nFringe is an add-on for Visual Studio 2010 developed by a company called Pixel Mine Games. It integrates seamlessly with Visual Studio and allows the user to develop applications in UnrealScript within this development environment.
nFringe offers basic IntelliSense support, giving the user suggestions on what might be needed next while typing. This support is useful, but unfortunately far from perfect: not all possible options are always detected, which can be confusing, especially when the actually needed option does not show up in the list of suggestions. Certain mistakes made during typing may also generate a disruptive error message, though this does not hinder the function of the add-on itself.
nFringe also offers syntax highlighting, making it much easier to read source code and detect obvious mistakes.
The most important function of nFringe however is the possibility to use the powerful debugger included in Visual Studio for running Unrealscript projects. While Intellisense and syntax highlighting can also be configured to work with a simple text editor like Notepad++ (Ho, 2011) even without the need for nFringe, actual runtime debugging is not possible with the tools UDK offers. Being able to set breakpoints and watch variables change values during runtime is often invaluable while trying to fix slightly more complicated bugs and errors.
5.4.2 Visual Studio
Visual Studio is a development suite produced by Microsoft. It supports many languages and allows easy management of very large development projects. It also offers a powerful debugger that enables real-time debugging with watch lists and breakpoints. One of the key features of Visual Studio is IntelliSense, which aids the programmer while writing source code by suggesting possible keywords and showing which parameters and datatypes are used when calling a specific function or method. These features greatly increase productivity.
For this project, Visual Studio was chosen because, with the help of the aforementioned nFringe add-on, it allows writing UnrealScript code with full IntelliSense support.
5.4.3 Adobe Photoshop
Photoshop by Adobe is considered the standard program for image manipulation and editing. Its many powerful features make it possible to edit images in such a way that the editing is not noticeable. During this project, Photoshop was used to create the collages that show the final design.
5.5 Special techniques used during development
5.5.1 Gesture Recognition
Recognizing gestures is a very complicated process. OpenNI, the Kinect SDK being used, has built-in gesture recognition. However, NIUI, the beta software used to connect UDK and Kinect, does not offer access to these functions, which means a gesture recognition system had to be implemented from scratch. Gestures have many different components that need to be considered.
- Spatial component: Each gesture has several key locations within a defined space. With Kinect, each of these key locations has 3 dimensions.
- Temporal component: A gesture is not instant, but rather happens over a certain period of time.
- Absolute vs. relative movement: It is not enough to simply compare the coordinates of a certain joint at different intervals to detect a gesture. For example, an arm can be moved in different ways. One way would be to use an arm to push the hand forward and pull it backward. Another way would be to keep the arm steady, but move the torso forward, or even keep the torso straight and simply take a step forward. All these movements would result in the same change of coordinates for the hands, but they might mean completely different things.
NIUI implements a function that allows reading the current coordinates of each joint at any time. It also automatically corrects the aforementioned problem of absolute vs. relative movement, by assuming that only movements relative to the rest of the body are important and ignoring movement of the whole body. Applied to the example above: when taking a step forward the coordinates of the hand do not change, but when moving just the hand, they do. This behavior, while convenient for normal gestures, prevented the implementation of a gesture that required the user to take a step towards the screen, as such a step would simply not be recognized.
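The correction NIUI applies can be illustrated with a small Python sketch (the prototype itself is written in UnrealScript, and the coordinate values here are made up; NIUI's real API differs):

```python
# Express a joint's position relative to the torso, so that whole-body
# movement (e.g. taking a step forward) cancels out.

def to_relative(joint_pos, torso_pos):
    return tuple(j - t for j, t in zip(joint_pos, torso_pos))

# Hand and torso in absolute coordinates (x, y, z):
print(to_relative((0.5, 1.5, 2.0), (0.0, 1.0, 2.0)))  # (0.5, 0.5, 0.0)

# After stepping 0.5 m forward, both absolute z values change,
# but the relative position stays the same:
print(to_relative((0.5, 1.5, 1.5), (0.0, 1.0, 1.5)))  # (0.5, 0.5, 0.0)
```

Moving only the hand, by contrast, changes the relative position, which is exactly what makes it usable for gesture detection.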
There are a couple of key points that are important for the implementation of gestures.
• A certain threshold for the movement amount needs to be defined, as it is impossible to hold a hand completely still. Kinect is very accurate at detecting movements, so even slight shifts would immediately trigger a gesture, which is not a desired effect.
• This movement also needs to happen within a certain timeframe because otherwise random movement could trigger a gesture after several seconds, potentially activating a function that the user had no intention of using.
• If movement stops, the gesture state must be reset after a certain period of time, as otherwise the system would never return to a state where no gesture is recognized.
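These three rules can be combined into a minimal Python sketch (the prototype is written in UnrealScript; the class, its names and all threshold values here are illustrative, not those used in the project). The sliding time window serves double duty: it enforces the timeframe rule and, because stale samples expire from it, it also acts as the reset when movement stops.

```python
from collections import deque

class PushDetector:
    """Illustrative push-gesture detector built on the three rules above."""

    def __init__(self, threshold=0.3, timeframe=0.5):
        self.threshold = threshold  # minimum travel distance (rule 1)
        self.timeframe = timeframe  # window the travel must fit in (rule 2)
        self.history = deque()      # (timestamp, hand z-coordinate) samples

    def update(self, t, z):
        self.history.append((t, z))
        # Expire samples older than the timeframe (rules 2 and 3).
        while self.history and t - self.history[0][0] > self.timeframe:
            self.history.popleft()
        # Enough forward travel (decreasing z) within the window?
        travel = self.history[0][1] - z
        if travel >= self.threshold:
            self.history.clear()    # start fresh after a detection
            return "push"
        return None

det = PushDetector()
det.update(0.0, 2.0)
det.update(0.1, 1.9)
print(det.update(0.2, 1.6))   # "push": ~0.4 m of travel within 0.2 s
```

Slow drift never accumulates enough travel inside the window, so it never triggers, which is the behavior the bullet points call for.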
Biswas and Basu (Biswas & Basu, 2011) describe a very robust and easily adaptable gesture recognition system that uses a multi-class Support Vector Machine and histograms to determine the differences between two frames, and with this method can detect any gesture the system has been trained for. However, due to limitations in both UDK and the NIUI SDK, it was not possible to implement this method. Lai, Konrad and Ishwar also confirm that a method based on machine learning would be more flexible and robust. They write:
“We believe that gesture recognition based on machine learning algorithms with suitably selected features and representations are likely to be more flexible and robust” (Lai, et al., 2012)
The currently implemented method uses hard coded thresholds and functions to determine gestures. While this method is robust and functional, it is also very difficult to add new gestures, as each new gesture would need to be coded and then tested to find the correct threshold, making this a time consuming task. This is also confirmed by Lai, Konrad and Ishwar when they write:
“..., using a fixed set of parameters (thresholds) makes the insertion of a new action/gesture difficult” (Lai, et al., 2012)
5.5.1.1 Push and Pull Gestures
These two gestures were the first to be implemented in the system. They both work in essentially the same way and are distinguished by their opposite movement direction, so only the push implementation will be discussed.
Push is essentially movement of the hand towards the screen. To detect if the user is actually performing a gesture as opposed to simply moving randomly, a history of the last hand coordinates is kept. This history is checked at every tick of the game engine, and if the history shows a large enough difference in distance, it is assumed that the user wants to perform the push gesture.
5.5.1.2 Pinch and Spread Gestures
These gestures are implemented in a very simple way. During every tick of the game engine, the distance between both hand indicators is calculated. As with the push and pull gestures, a history of previous distances is kept. If the distance between both hand indicators is growing, a spread gesture is assumed, whereas a shrinking distance is assumed to be a pinch gesture.
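The distance-history check can be sketched in Python as follows (an illustrative sketch, not the prototype's UnrealScript code; the threshold value is made up):

```python
import math

def hand_distance(left, right):
    """Euclidean distance between the two 3D hand positions."""
    return math.dist(left, right)

def classify(dist_history, min_change=0.15):
    """Compare the oldest and newest inter-hand distance in the history."""
    change = dist_history[-1] - dist_history[0]
    if change >= min_change:
        return "spread"   # hands moving apart
    if change <= -min_change:
        return "pinch"    # hands moving together
    return None           # no significant change

# Distances sampled at three consecutive engine ticks:
history = [hand_distance((0.0, 0, 0), (d, 0, 0)) for d in (0.2, 0.3, 0.45)]
print(classify(history))   # "spread": distance grew by 0.25
```

As with the push and pull detection, the threshold keeps small jitter in the hand positions from being misread as a gesture.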
5.5.2 Detecting if hand indicators are hovering over a bubble
One of the most important functions of the interface is detecting whether the hand indicators representing the user's hands are hovering over one of the bubbles currently shown on screen. This often indicates that the user wants to interact with that bubble in some way, so this detection needs to work quickly and very precisely. Any misdetection here would result in unexpected behavior and confuse the user.
The first problem is that the whole system is based in a three-dimensional space. The bubbles are spheres placed at certain coordinates, and the hand indicators are planes positioned in front of all other objects in this space so that they can never be hidden behind an object, which would confuse the user. What is seen on screen is in fact the image captured by a virtual camera, itself positioned in the world at a specific set of coordinates. An issue concerning perspective comes into play here, however: if a bubble is not exactly centered on screen, a hand indicator may appear not to hover over it even though a comparison of their coordinates indicates that it does.
*Figure 11: Top-down illustration of the perspective problem*
This problem is illustrated in the picture above. The medium sized dot at the bottom represents the camera, the large dot at the top is one of the bubbles on screen and the small dot in the middle is one of the hand indicators. The long angled lines to the left and right represent the Field-of-View of the camera. The line in between represents the view vector from the camera towards the hand indicator. As can be seen, the vector misses the actual bubble, creating the illusion that the hand indicator is to the right of the bubble, even though the top-down view reveals that it is actually right in front of it.
To alleviate this problem, the 3D coordinates of both the bubble and the hand indicator are projected to two-dimensional screen coordinates. Because perspective can make objects appear smaller than they actually are, the radius of the projected bubble also needs to be calculated. Once these calculations are done, the distance between the projected center of the bubble and the projected location of the hand indicator can be computed and compared to the projected radius: if the distance is smaller, the hand indicator is in fact hovering over that bubble; if it is larger, it is not.
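The projection-and-compare test can be sketched as follows, assuming a simple pinhole camera looking down the z axis; this is a Python illustration, and UDK's actual projection functions differ:

```python
import math

def project(point, focal=1.0):
    """Project a 3D point (x, y, z) to 2D screen coordinates."""
    x, y, z = point
    return (focal * x / z, focal * y / z)

def is_hovering(hand3d, bubble_center3d, bubble_radius):
    hx, hy = project(hand3d)
    bx, by = project(bubble_center3d)
    # Perspective shrinks the apparent radius with depth.
    projected_radius = bubble_radius / bubble_center3d[2]
    return math.hypot(hx - bx, hy - by) <= projected_radius

# Hand visually in front of an off-centre bubble twice as far away:
print(is_hovering((1.0, 0.0, 2.0), (2.0, 0.0, 4.0), 0.8))   # True
```

Note that comparing the raw 3D x/y coordinates here would wrongly report a miss, which is exactly the perspective problem described above.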
This basic algorithm needs to be extended, however, as the nature of the system means it may sometimes return ambiguous and unpredictable results. A bubble can contain other bubbles, and when hovering over a bubble that lies within another, the algorithm would return a positive result for both of them. A function that decides which of these results is the required one is therefore needed.
The flowchart below explains the implemented algorithm in more detail. First, the coordinates of both hand indicators are projected to 2D screen coordinates. Then a list containing all current bubbles is traversed, and the algorithm described above is applied to check whether one or both hand indicators are currently hovering over each bubble. If a hover is detected, another check determines whether this bubble contains other bubbles; if so, these are added to the end of the list of bubbles to check. As these internal bubbles are not always on screen, they are only added temporarily, and only when they belong to a bubble that is currently at maximum zoom level and actually showing its contents. If it then turns out that no hand indicator is hovering over them, they are removed from the list again.
Figure 12: General hover detection algorithm
- Start: project the 3D hand coordinates to screen coordinates.
- While there are bubbles left to check: get the next bubble, project its coordinates, and calculate the distance between the hand indicator and the bubble.
- If the distance is smaller than the bubble radius, the hand indicator is hovering over this bubble; if the bubble has internal bubbles, add them to the list of bubbles to check.
- If no hover is detected and the bubble is an internal bubble, remove it from the list of bubbles.
- End when no bubbles remain to check.
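The list traversal described above can be sketched in Python (Bubble and its fields are hypothetical stand-ins for the prototype's UnrealScript actors, and the projection-based hover test is abstracted to a precomputed flag):

```python
class Bubble:
    """Illustrative stand-in for a bubble actor."""
    def __init__(self, name, hovered=False, children=None):
        self.name = name
        self.hovered = hovered        # result of the projection hover test
        self.children = children or []  # internal bubbles, if any

def find_hovered(bubbles):
    """Walk the bubble list, appending internal bubbles only when
    their parent tests positive, as in the flowchart."""
    queue = list(bubbles)
    hovered = []
    while queue:
        b = queue.pop(0)
        if b.hovered:
            hovered.append(b.name)
            # Internal bubbles join the end of the list for checking.
            queue.extend(b.children)
    return hovered

inner = Bubble("Portfolio", hovered=True)
outer = Bubble("Work", hovered=True, children=[inner])
print(find_hovered([outer, Bubble("About")]))   # ['Work', 'Portfolio']
```

When both an outer bubble and one of its internal bubbles test positive, as here, a separate decision function (in the prototype, preferring the bubble currently showing its contents) picks the intended one.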
5.5.3 Using opacity maps for hover effects
Whenever a bubble is not in use, it shows only a single letter indicating what type of content it contains, as shown in the first picture. To save the user from having to guess what this means, a function has been implemented that adds the full title of the bubble whenever a hand indicator hovers over it, as shown in the second picture.
Figure 13: Bubble without hover effect
Figure 14: Bubble with hover effect
To achieve this effect, an opacity map is used, which defines which parts of a texture are transparent and which are not. Both the opacity map and the texture used to display the text are shown here. The black background is not part of the original texture, which is normally transparent; it has been added for presentation within this document.
Figure 15: Texture holding all text
Figure 16: Opacity Map
Opacity maps define which parts of a texture should be opaque, usually represented by white, and which should be transparent, usually represented by black. Looking at both textures above, it can easily be seen that applying the opacity map to the text texture makes the lower part containing the word “Portfolio” transparent. By dynamically adding and removing the opacity map, the desired hover effect can be achieved.
The picture above shows the complete shader program created within UDK's material editor. It is important to know that RGBA values in UDK range from (0.0, 0.0, 0.0, 0.0) to (1.0, 1.0, 1.0, 1.0). Multiplying a texture with a black texture will therefore always produce a black texture, while multiplying a texture with a white texture leaves the initial texture unchanged.
The top texture sample that is appearing white contains the texture that holds all the text. Its RGB channels, represented by the black box, are connected to the “Diffuse” channel of the material. Its transparency channel is multiplied with the opacity map which then generates the final opacity map that is connected to the “OpacityMask” channel of the material. The original opacity mask is altered depending on the value of the parameter “Hide_Text”. Whenever its value is the same or larger than the parameter “Compare_Value”, the normal RGB channels of the texture are multiplied with the transparency channel, making the lower part black, or transparent, in the process and hiding that part of the text. When the parameter value is lower however, the transparency channel of the opacity map is used instead. Because this texture is completely opaque, this means that its transparency channel is actually completely white. If this texture is then multiplied with the transparency channel of the other texture, no change is made at all and the whole texture including the previously hidden text is shown.
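Numerically, the material logic reduces to per-channel multiplications in the [0.0, 1.0] range. The following Python sketch mirrors the Hide_Text / Compare_Value branch (the parameter names come from the material described above; the function itself is only an illustration, not UDK API):

```python
def final_opacity(text_alpha, mask_value, hide_text, compare_value):
    """Per-pixel value fed into the material's OpacityMask channel."""
    if hide_text >= compare_value:
        # Multiply with the opacity map: black (0.0) regions of the map
        # make the corresponding text pixels transparent.
        return text_alpha * mask_value
    # Multiply with the all-opaque (white, 1.0) alpha channel instead:
    # the text texture is left unchanged and fully shown.
    return text_alpha * 1.0

# A pixel inside the word "Portfolio", where the map is black:
print(final_opacity(1.0, 0.0, hide_text=1.0, compare_value=0.5))  # 0.0, hidden
print(final_opacity(1.0, 0.0, hide_text=0.0, compare_value=0.5))  # 1.0, shown
```

Toggling the Hide_Text parameter at runtime thus switches the word “Portfolio” between hidden and visible, producing the hover effect.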
5.5.4 Bubbles and Animations
During the initial implementation phase, the bubbles on screen were created using a standard static mesh provided by UDK. This caused problems later on, as static meshes cannot be animated properly because they are, as their name suggests, static. Since the zoom levels of static meshes can be altered in all three dimensions, animating the bubbles this way was attempted first, but it proved an unreliable and ugly implementation. Animation was desired, however, as the bubbles are inspired by soap bubbles, and animating them would look much more lifelike and interesting.
Instead, Imagination Studios supplied an animated bubble combined with a skeleton modeled in Autodesk Maya 2011. Epic Games provides plugins for both Autodesk Maya and Autodesk 3D Studio MAX that help export models and animations into a file format that can be imported into UDK. This plugin, called ActorX, can be downloaded free of charge. The exported animation and skeletal mesh were imported into UDK and combined there, making the bubbles much more interesting to look at.
6. Test and Evaluation
This section is relatively short because the main focus of this project was the creation of a new type of user interface and the subsequent development of a conceptual prototype. Testing was therefore a low priority during this project.
During development and testing it quickly became apparent that a gesture system like the one implemented, in which users need to hold their arms and hands up, is tiring very quickly and is only suitable for applications used during short periods of time. To alleviate this problem, the gesture system was adjusted to allow users to keep their hands and arms closer to their body, thus reducing the fatigue caused by the gestures.
A second issue with the gestural recognition system was an unwanted detection of gestures when people would let their arms and hands drop to their sides while looking at the content of a cell. The system would often mistake the movement for a tap gesture, making everything on screen behave erratically. A temporary solution involving the tweaking of gestural parameters was implemented, but the end result was not satisfactory. Further development would need to look into this problem in more detail to find a proper solution.
Another issue was that it was not immediately obvious to people that they needed to tap a cell if they wanted to activate it. Even though this is essentially the same gesture required as on a touch screen, the different way of interacting nonetheless seemed to make this gesture less obvious. A solution for this problem has not been found and should be addressed during further development of the system.
7. Conclusion and Outlook
This chapter will contain a conclusion and an outlook on how the project could be improved upon.
In general, the project turned out satisfactorily. The whole process, from the first pencil-on-paper design sketches to seeing the finished product on screen and being able to interact with it, was very enjoyable and educational. Following a proper design process really helped the project along by providing a much clearer goal. This helped maintain focus during the actual development phase and also made it more effective. Without the design process, the design might have been changed or completely redone in the middle of development, which would have slowed down the process and potentially prevented the project from reaching its current status.
The research on both technical and legal issues of the project was very interesting as well, allowing a much deeper insight into the whole world of software, or more specifically game, development. Legal issues especially were never a concern before, so learning to read and understand license agreements was a completely new experience. The research into all technical issues offered a lot of new insight especially into tools and game engines.
Towards the end, however, it became clear that the simple approach to gesture recognition used in the project only offered basic functionality. User testing often exhibited unintended activation of certain gestures, confusing the user in the process. Only at this point did it become obvious that more research time for proper gesture recognition should have been allocated at the beginning of the project.
Further development should focus on two key issues: gesture recognition and easier content management.
The gestural recognition system would probably need to be re-implemented in a different way. A possible solution would be to implement a system using a support vector machine as done by Biswas and Basu (Biswas & Basu, 2011). Using statistics and pattern recognition, this would not only allow for a much more robust gesture recognition, but also for an easy way to add new gestures, as the current system requires extensive programming work to add new gestures.
Another issue with the project is adding new content. This problem results from using UDK, as it requires content to be added before runtime using the included editors. This can make fixing even small mistakes like typos into a larger process. A function that would allow for content to be loaded dynamically at runtime without the need for using the editor would make the software a lot more usable in day to day operations.
8. References
A Service-Oriented Architecture enabling dynamic services grouping for optimizing distributed workflows execution
Tristan Glatard, Johan Montagnat, David Emsellem, Diane Lingrand
HAL Id: hal-00459808
https://hal.archives-ouvertes.fr/hal-00459808
Submitted on 25 Feb 2010
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
A Service-Oriented Architecture enabling dynamic service grouping for optimizing distributed workflow execution
Tristan Glatard\textsuperscript{a,b,c,*} Johan Montagnat\textsuperscript{a,c} David Emsellem\textsuperscript{c} Diane Lingrand\textsuperscript{c,a}
\textsuperscript{a}I3S, CNRS, 2000 route des Lucioles, 06903 Sophia Antipolis, France
\textsuperscript{b}Asclepios, INRIA Sophia, 2004 route des Lucioles, 06902 Sophia Antipolis, France
\textsuperscript{c}University of Nice-Sophia Antipolis, 930 route des Colles, 06903 Sophia Antipolis, France
Abstract
In this paper, we describe a Service-Oriented Architecture allowing the optimization of the execution of service workflows. We discuss the advantages of the service-oriented approach with regards to the enactment of scientific applications on a grid infrastructure. Based on the development of a generic Web-Services wrapper, we show how the flexibility of our architecture enables dynamic service grouping for optimizing the application execution time. We demonstrate performance results on a real medical imaging application. On a production grid infrastructure, the optimization proposed introduces a significant speed-up (from 1.2 to 2.9) when compared to a traditional execution.
Key words: Grid workflows, Service Oriented Architecture, Legacy code wrapper, Service grouping
* Corresponding author
Email addresses: glatard@i3s.unice.fr (Tristan Glatard), johan@i3s.unice.fr (Johan Montagnat), emsellem@polytech.unice.fr (David Emsellem), lingrand@i3s.unice.fr (Diane Lingrand).
URLs: http://www.i3s.unice.fr/~glatard (Tristan Glatard), http://www.i3s.unice.fr/~johan (Johan Montagnat), http://www.i3s.unice.fr/~lingrand (Diane Lingrand).
Preprint submitted to Elsevier 3 July 2009
1 Introduction
Grid technologies are very promising for addressing the computing and storage needs arising from many scientific and industrial application areas. Grids have a potential for massive parallelism that can drastically improve application execution times but, except in a few simple cases, this potential is often not straightforward for application developers to exploit. A tremendous amount of work has gone into the development of various sequential data processing algorithms without taking into account the properties of distributed systems or specific middlewares. Even for new code, instrumenting applications with middleware-specific interfaces or designing applications to explicitly take advantage of distributed grid resources is a significant burden for developers, who are often reluctant to allocate sufficient effort to problems that are not application-specific. Grid middlewares are therefore expected to ease the migration of both legacy and new codes to a grid infrastructure as much as possible by:
- proposing a non-intrusive interface to existing application code; and
- optimizing the execution of applications on grid resources.
The first point can be addressed by generic code wrappers which do not require code instrumentation for executing non-specific codes on a grid. In particular, they ease the reuse of legacy codes.
For a large range of scientific applications, the second point is addressed by workflow managers. Scientific data processing procedures often require to apply many data filtering, modeling, quantification and analysis procedures. Furthermore, large data sets often have to be processed. A workflow manager can describe the processing dependencies independently from the actual scientific codes involved. The associated workflow enactor can optimize the execution on a grid infrastructure by exploiting the data and code parallelisms intrinsically expressed in the workflow.
This paper deals with code migration on grid infrastructures. Our ultimate goal is to propose a generic system, able to gridify any legacy code efficiently. The two aspects highlighted above will be studied and solutions will be proposed.
Service-Oriented Architectures (SOA) have encountered large success in both the Grid and Web communities. Most recent middlewares have adopted them in order to address interoperability and extensibility problems. Although SOAs have been widely adopted for middleware design, and despite their known advantages, they are less frequently encountered in the design of scientific applications.
The advantages and drawbacks of service-based design are first discussed in section 2. A code wrapper that makes it possible to benefit from a service-based approach at a very low development cost, even for legacy codes, is then described in section 3. An SOA workflow engine design is introduced in section 4, and optimization results, measured through workflow executions on a grid that group sequential computing tasks and thus reduce the overheads due to grid job submission, are shown in section 5. Experiments on a real medical image analysis application are finally presented in section 6. They demonstrate that significant speed-ups can be achieved thanks to our grouping optimization.
2 Task-based and service-based applications
Two main paradigms are used in grid middlewares for describing and controlling application processing. The task-based approach is the most widely adopted and has been exploited for the longest time. It consists in an exhaustive description of the command line and the remote execution of the application code. The service-based approach has emerged more recently. It consists in using a standard invocation protocol for calling application code embedded in a service. It is usually complemented by service discovery and interface description mechanisms.
2.1 Task-based job submission
In the task-based job submission approach, each processing is related to an executable code and described through an individual computation task. A task description encompasses at least the executable code name and a command line to be used for code invocation. It may be completed by additional parameters, such as input and output files to be transferred before or after the execution, and additional task scheduling information such as minimum system requirements. Tasks may be described either directly on the job submission tool command line, or indirectly through a task description file. Unless considering very simple code invocation use cases, description files are often needed to specify the task in detail. Many file description formats have been proposed, and the OGF\(^1\) unified different formats in the Job Submission Description Language (JSDL) \[1\]. The task-based approach is also often referred to as global computing.
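As an illustration, a task description in the spirit of JSDL might look like the following sketch (simplified and not schema-complete; the executable path, argument and file name are hypothetical):

```xml
<jsdl:JobDefinition xmlns:jsdl="http://schemas.ggf.org/jsdl/2005/11/jsdl">
  <jsdl:JobDescription>
    <jsdl:Application>
      <jsdl:POSIXApplication>
        <jsdl:Executable>/usr/local/bin/analyze</jsdl:Executable>
        <jsdl:Argument>-in</jsdl:Argument>
        <jsdl:Argument>image.dat</jsdl:Argument>
      </jsdl:POSIXApplication>
    </jsdl:Application>
    <jsdl:DataStaging>
      <!-- file transferred to the execution site before the run -->
      <jsdl:FileName>image.dat</jsdl:FileName>
    </jsdl:DataStaging>
  </jsdl:JobDescription>
</jsdl:JobDefinition>
```

Note that the executable, its arguments and the files to stage are all fixed in the description: running the same code on another file requires another such document, a point discussed in section 2.3.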
In the task-based paradigm, code invocation is straightforward and does not require any adaptation of the user code; for this reason it has been implemented in most existing batch systems for decades (e.g. PBS [2], NQS [3], OAR [4]). Many grid middlewares, such as Globus Toolkit 2 [5], CONDOR [6] and gLite [7], are also task-based from the application code perspective. Indeed, even if those middlewares (in particular gLite) may themselves be designed as a set of interoperating services, the computing resources of the grid are accessed through task submissions.

\(^1\) OGF, Open Grid Forum, [http://www.gridforum.org/](http://www.gridforum.org/)
2.2 Service-based code execution
The service-based approach has been widely adopted for dealing with heterogeneous and distributed systems. In particular, for middleware development, the OGSA framework [8] and the subsequent WSRF standard gained wide adoption in the international community. In the service-based approach, the code is embedded in a standard service shell. The standard defines an interface and an invocation procedure. The Web-Services standard [9], supported by the W3C, is the most widely available, although many existing implementations do not yet conform to the whole standard. It has been criticized for the low efficiency resulting from exchanging text messages in XML format, and alternatives such as GridRPC [10] have been designed to speed up message exchanges. The service-based approach is also often referred to as meta computing. Middlewares such as DIET [11], Ninf [12], Netsolve [13] and Globus Toolkit 4 [14] adopted this approach.
The main advantage of the service-based approach is the flexibility it offers. Clients can discover and invoke any service through standard interfaces, without any prior knowledge of the code to be executed. The service-based approach delegates the actual code execution procedure to the server side. However, all application codes need to be instrumented with the service interface to become available. Legacy code applications often are not, and an intermediate code invocation layer or some code reworking is needed to exploit this paradigm. Users are often reluctant to invest effort in writing service-specific code on the application side, for several reasons:
- The complexity of the standards makes service conformity a matter for specialists. Some tools are available to help generate service interfaces, but they cannot be fully automated and often require developer intervention.
- Standards tend to evolve quickly, especially in the grid area, making earlier efforts obsolete on a short time scale.
- Multiple standards exist, and the same application code may need to be executed through different service interfaces.
- In the case of legacy code, recompilation for instrumenting the code may be very difficult or even impossible (when source code, compilers, dependencies, etc. are not available).
Therefore, a user-friendly way to deal with legacy code is to propose a generic service-compliant code execution interface.
2.3 Discussion
Apart from the invocation procedures and the implementation difficulties mentioned above, the task-based and service-based approaches differ on several fundamental points which impact their usage:
- To submit a task-based job, a user needs to know precisely the command-line format of the executable, taking into account all of its parameters. In the scientific community, this is often not the case when the user is not one of the developers. Conversely, in the service-based approach, the actual code invocation is delegated to the service, which is responsible for the correct handling of the invocation parameters. The service is a black box from the user's side and, to some extent, it can deal with the correct parametrization of the code to be executed.
- The handling of input/output data is very different in the two cases. In the task-based approach, input/output data have to be explicitly specified in the task description; executing the same code on different data items requires writing a new task description. Services better decouple the computation and data handling parts: a service dynamically receives its inputs as parameters. This decoupling between treatments and data is particularly important when processing complete data sets rather than single data items, as is commonly targeted on grid infrastructures.
- The service-based approach enables discovery mechanisms and dynamic invocation, even for a priori unknown services. This provides a lot of flexibility both for the user (discovery of available data processing tools and their interfaces) and for the middleware (automatic selection of services, alternative service discovery, fault tolerance, etc.).
- In the service-based framework, code reusability is also improved by the availability of a standard invocation interface. In particular, services are naturally well adapted to describing applications as complex workflows, chaining different processings whose outputs are piped to the inputs of the next.
- Services add an extra layer between the code invocation and the grid infrastructure on which jobs are submitted. The caller does not need to know anything about the underlying middleware, which is invoked internally by the service. Different services might even communicate with different middlewares and/or different grid infrastructures.
Yet, service deployment requires extra effort compared to the task-based approach. Indeed, to enable invocation, services first have to be installed on all the targeted resources, which becomes a challenging problem as their number increases.
The flexibility and dynamic nature of services described above is usually much appreciated by users. Given that application services can be deployed at a very low development cost, there are a number of advantages in favor of this approach.
From the middleware developer's point of view, though, the efficient execution of application services is more difficult. As mentioned above, a service is an intermediate layer between the user and the grid middleware; thus, the user does not know anything about the underlying infrastructure, and tuning job submission for a specific application is harder. Services are completely independent from each other, so global optimization strategies are hardly usable. Therefore, some precautions need to be taken with service-based applications to ensure good performance.
2.4 Workflow of services
Building applications by assembling legacy codes for processing and analyzing data is very common. It allows code reuse without placing too high a load on the application developers. The logic of such a composed application, referred to as the application workflow, is described through a set of computation tasks to perform and constraints on the order of processing, such as data dependencies.
Many workflow representation formats and execution managers have been proposed in the literature with very different properties [15]. The emblematic task-based workflow manager is the CONDOR Directed Acyclic Graph Manager (DAGMan) [16], on top of which the Pegasus system is built [17]. Based on the static description of such a workflow, many different optimization strategies for the execution have been proposed [18]. The service-based approach has been implemented in different workflow managers such as the Kepler system [19], the Taverna workbench [20], Triana [21] and the MOTEUR enactor developed in our team [22], which aims at optimizing the execution of data intensive applications.
The main interest of using grid infrastructures is to exploit the potential application parallelism thanks to the availability of grid resources. Three different levels of parallelism can be exploited in service-based workflows [32]. Service grouping strategies have to take them into account carefully in order to avoid slowing down the execution.
Workflow parallelism. The intrinsic workflow parallelism depends on the application graph topology. For instance, if we consider the application example presented in figure 7, services Baladin and Yasmina can be executed in parallel.
Data parallelism. Data segments are processed independently from each other. Therefore, different input data segments can be processed in parallel on different resources. This may lead to considerable performance improvements given the high level of parallelism achievable in many scientific applications.
Service parallelism. The processing of two different data sets by two different services is totally independent. This pipelining model, very successfully exploited inside CPUs, can be adapted to sequential parts of service-based workflows. Considering the workflow represented in figure 7, services crestLines and crestMatch may be run in parallel on independent data sets. In practice, this kind of parallelism strongly improves workflow execution on production grids.
3 Generic Web-Service wrapper
3.1 Wrapping application codes
To ease the embedding of legacy or non-instrumented codes in the service-based framework, an application-independent job submission service is required. In this section, we briefly review systems that are used to wrap legacy code into services to be embedded in service-based workflows.
The Java Native Interface (JNI) has been widely adopted for the wrapping of legacy codes into services. Wrappers have been developed to automate this process. In [24], an automatic JNI-based wrapper of C code into Java and the corresponding type mapper with Triana [21] is presented: JACAW generates all the necessary Java and C files from a C header file and compiles them. A coupled tool, MEDLI, then maps the types of the obtained Java native method to Triana types, thus enabling the use of the legacy code into this workflow manager. Related to the ICENI workflow manager [25], the wrapper presented in [26] is based on code re-engineering. It identifies distinct components from a code analysis, wraps them using JNI and adds a specific CXML interface layer to be plugged into an ICENI workflow.
The WSPeer framework [27], interfaced with Triana, aims at easing the deployment of Web-Services by exposing many of them at a single endpoint. It differs from a container approach by giving the application control over service invocation. The Soaplab system [28] is especially dedicated to the wrapping of command-line tools into Web-Services. It has been largely used to integrate bioinformatics executables in workflows with Taverna [20]. It is able to deploy a Web-Service in a container, starting from the description of a command-line tool. This command-line description, referred to as the metadata of the analysis, is written for each application using the ACD text format and then converted into a corresponding XML format. Among domain-specific descriptions, the authors underline that such a command-line description format must include (i) the description of the executable, (ii) the names and types of the input data and parameters and (iii) the names and types of the resulting output data. As described later, the format we use includes those features and adds new ones to cope with the requirements of executing legacy code on grids.
The GEMLCA environment [29] addresses the problem of exposing legacy command-line programs as Grid services. It is interfaced with the P-GRADE portal workflow manager [30]. The command-line tool is described with the LCID (Legacy Code Interface Description) format, which contains (i) a description of the executable, (ii) the name and binary file of the legacy code to execute and (iii) for each argument, its name, nature (input or output), order, whether it is mandatory, whether it is a file or a command-line value, its fixed value and regular expressions to be used for input validation. A GEMLCA service depends on a set of target resources where the code is going to be executed. Architectures providing resource brokering and service migration at execution time are presented in [31].
Apart from this latest early work, all of the reviewed existing wrappers are static: the legacy code wrapping is done offline, before the execution. This is hardly compatible with our approach, which aims at optimizing the whole application execution at run time. As we will see in section 5.2, our design exploits a dynamic grouping strategy optimization that is enacted through a service factory called at execution time by the MOTEUR workflow manager. We thus developed a specific grid submission Web-Service, which can wrap an executable at run time.
3.2 A generic Web-Service wrapper
We designed a generic application code wrapper compliant with the Web-Services specification. It enables the execution of a legacy executable through a standard service interface. This service is generic in the sense that it is unique and does not depend on the executable code to submit. It exposes a standard interface that can be used by any Web-Service compliant client to invoke the execution. It completely hides the grid infrastructure from the end user as it takes care of the interaction with the grid middleware. This interface plays the same role as the ACD and LCID files quoted in the previous section, except that it is interpreted at execution time.
To accommodate any executable, the generic service takes two different inputs: a descriptor of the legacy executable's command-line format, and the input parameters and data of this executable. The production of the legacy code descriptor is the only extra work required from the application developer. It is a simple XML file which describes the legacy executable's location, command-line parameters, and input and output data.
3.3 Legacy code descriptor
The command-line description has to be complete enough to allow dynamic composition of the command line from the list of parameters at service invocation time, and to access the executable and input data files. As a consequence, the executable descriptor contains:
1. The name and access method of the executable. In our current implementation, access methods can be a URL or a Grid File Name (GFN). The wrapper is responsible for fetching the data according to different access modes.
2. The access method and command-line option of the input data. As our approach is service-based, the actual names of the input data files are not mandatory in the description; those values will be defined at execution time. This feature differs from the various job description languages used in task-based middlewares. The command-line option allows the service to dynamically build the actual command line at execution time.
3. The command-line option of the input parameters: parameters are command-line values that are not files and therefore do not have any access method.
4. The access method and command-line option of the output data. This information enables the service to register the output data in a suitable place after the execution. Here again, in a service-based approach, the names of the output data files cannot be statically determined, because they are only generated at execution time.
5. The name and access method of the sandboxed files. Sandboxed files are external files such as dynamic libraries or scripts that may be needed for the execution although they do not appear on the command-line.
3.4 Example
An example of a legacy code description file is presented in figure 1. It corresponds to the description of the crestLines service of the workflow depicted in figure 7. It describes the script CrestLines.pl, which is available from the server legacy.code.fr and takes 3 input arguments: 2 files (options -im1 and -im2 of the command line) that are already registered on the grid as GFNs, and 1 parameter (option -s of the command line). It produces 2 files that will be registered on the grid. It also requires 3 sandboxed files that are available from the server (Convert8bits.pl, copy and cmatch).
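Since figure 1 is not reproduced here, the following sketch gives an idea of what such a descriptor could look like for crestLines, based on the description above; the tag and attribute names are hypothetical and the actual MOTEUR format may differ:

```xml
<description>
  <executable name="CrestLines.pl">
    <access type="URL" server="legacy.code.fr"/>
  </executable>
  <input name="image1" option="-im1" access="GFN"/>
  <input name="image2" option="-im2" access="GFN"/>
  <parameter name="scale" option="-s"/>
  <output name="crest1" access="GFN"/>
  <output name="crest2" access="GFN"/>
  <sandbox>
    <file name="Convert8bits.pl" server="legacy.code.fr"/>
    <file name="copy" server="legacy.code.fr"/>
    <file name="cmatch" server="legacy.code.fr"/>
  </sandbox>
</description>
```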
The command-line description format presented here may have limitations for applications that are not pure command-line tools. For instance, some applications may require input from stdin or even ask for graphical interaction with the user. To cope with these limitations, our description format could easily be extended; yet this would depend on the ability of the grid middleware to handle such interactive jobs.
3.5 Discussion
This generic service highly simplifies application development because it can wrap any legacy code with minimal effort. The application developer only needs to write the executable descriptor for her code to become service-aware.
But its main advantage is that it enables the sequential service grouping optimization described in section 5. Indeed, as the workflow enactor has access to the executable descriptors, it can dynamically create a virtual service, composing the command lines of the codes to be invoked, and submit a single job corresponding to this sequence of command-line invocations.
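To make this composition concrete, here is a minimal Python sketch of how a virtual service could chain the wrapped command lines of grouped services into a single grid job. The dictionary fields and the service names are hypothetical simplifications of the XML descriptor of section 3.3, not MOTEUR's actual implementation:

```python
def build_command(descriptor, values):
    """Compose one legacy command line from a descriptor (a simplified,
    dictionary-based rendering of the XML file of section 3.3) and the
    input values received at service invocation time."""
    argv = [descriptor["executable"]]
    for arg in descriptor["args"]:  # inputs and parameters, in order
        argv += [arg["option"], str(values[arg["name"]])]
    return " ".join(argv)

def group_commands(descriptors, values_by_service):
    """Virtual service for a group: the command lines of the grouped
    services are chained into one shell script, submitted as one job."""
    lines = ["#!/bin/sh", "set -e"]  # abort the job if one step fails
    for d in descriptors:
        lines.append(build_command(d, values_by_service[d["name"]]))
    return "\n".join(lines)
```

A single submission of the resulting script replaces one grid job per service, which is where the overhead reduction of section 5 comes from.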
It is important to notice that our solution remains compatible with the service standards. The workflow can still be executed by other enactors, as we did not introduce any new invocation method; those enactors will make standard service calls (e.g. SOAP ones) to our generic wrapping service. However, the optimization strategy described in the next section is only applicable to services including the descriptor mentioned in section 3.3. We call those services MOTEUR services, referring to our workflow manager presented in section 2.4.
Fig. 1. Legacy code descriptor example for our service wrapper. The location of the executable is first described. Then, inputs and outputs participating in the command-line generation are specified. Finally, external dependencies (such as dynamic libraries) are described in the sandbox section.
4 Workflow manager SOA
The generic Web-Service wrapper introduced in section 3 drastically simplifies the embedding of legacy code into application services. However, it mixes two different roles:
- the legacy command line generation
- the grid submission.
Submission depends only on the target grid and not on the application service itself. In a SOA, it is preferable to split these two roles into two independent services for several reasons. First, the code handling job submission does not need to be replicated in all application services. Second, the submission role can be transparently and dynamically changed (to submit to a different infrastructure) or updated (to adapt to middleware evolutions).
Figure 2 illustrates the resulting SOA design through a simple workflow deployment example. The workflow manager orchestrates three different services $P_1$, $P_2$, and $P_3$. These are standard Web-Services: either legacy code wrapping services ($P_1$ and $P_2$, in blue) or any Web-Service ($P_3$) that the workflow manager can invoke. The services may submit jobs to a grid infrastructure. In particular, MOTEUR services are interfaced with a submission service (in red). There may exist various submission services corresponding to several grid infrastructures. Services may thus use different infrastructures and even dynamically change the submission target during the execution (e.g. taking into account the infrastructure load). In the next section, we will see how the flexibility of this SOA can be exploited to dynamically optimize the execution of an application workflow.
5 Service grouping optimization strategy
In this section, we propose a service grouping strategy to optimize the execution time of a workflow. Grouping the services of a workflow may reduce the total overhead induced by submission, scheduling, queuing and data transfer times, because it reduces the number of jobs required to run the application. The impact is particularly important on production infrastructures, where this overhead can be very high (several minutes) due to the large scale and multi-user nature of those platforms. Consider the simple workflow represented on the left side of figure 3. On top, services $P_1$ and $P_2$ are invoked independently: data transfers are handled by each service, and the connection between the output of $P_1$ and the input of $P_2$ is handled at the workflow engine level. On the bottom, $P_1$ and $P_2$ are grouped into a single virtual service. This service sequentially invokes the code embedded in both services, thus resolving the data transfer and independent code invocation issues.
Conversely, grouping services may also reduce parallelism, and the grouping strategy has to be designed carefully to avoid performance losses. In particular, grouping sequentially linked services is interesting because they do not benefit from any parallelism. Those groupings can be done at the service level, i.e. they will apply to each data item processed by the workflow. For example, considering the workflow of our application presented in figure 7, services crestLines and crestMatch can be grouped without parallelism loss, as can services PFMatchICP and PFRegister.
From the middleware point of view, grouping strategies may also be interesting because they reduce the total number of jobs to handle, thus decreasing the global load imposed on the infrastructure. Yet, grouping services leads to the submission of longer jobs, which may also increase the average queuing time as a damaging side effect.
5.1 Grouping strategy
Service grouping can lead to significant speed-ups, especially on production grids, which introduce high overheads, as will be demonstrated in the next section. However, it may also slow down the execution by limiting one of the 3 levels of parallelism described in section 2.4. We thus have to determine efficient strategies for grouping services.
In order to determine a grouping strategy that does not introduce any slowdown, neither from the user point of view, nor from the infrastructure one, we impose the two following constraints:
Fig. 3. Classical services invocation (top) and service grouping (bottom).
- the grouping strategy must not limit any kind of parallelism (user point of view)
- during their execution, jobs cannot communicate with the workflow manager (infrastructure point of view).
The second constraint prevents a job from holding a resource while just waiting for one of its ancestors to complete. An implication of this constraint is that if services A and B are grouped together, the results produced by A will only be available once B has completed.
A workflow may include both MOTEUR Web-Services (i.e. services that can be grouped) and classical ones, which cannot be grouped. Given these two constraints, we can prove the following rule:
Let $A$ be a MOTEUR service of the workflow and $\{B_0,...,B_n\}$ its children in the service graph. Grouping $B_i$ and $A$ does not lead to any parallelism loss IF and ONLY IF:
1. $B_i$ is an ancestor of every $B_j$ with $j \neq i$, and
2. each ancestor $C$ of $B_i$ is an ancestor of $A$ or $A$ itself.
Let us first prove that (1) and (2) are necessary conditions to avoid parallelism loss. If (1) is not respected, then there exists a child $B_j$ of $A$ which is not a descendant of $B_i$. If $A$ and $B_i$ are grouped, then workflow parallelism is broken between $B_i$ and $B_j$ because $B_j$ has to wait for $B_i$ to complete before starting. Similarly, if (2) is not respected, then there exists an ancestor $C$ of $B_i$ that is not an ancestor of $A$ and workflow parallelism is broken between $A$ and $C$ when $A$ and $B_i$ are grouped.
(1) and (2) are also sufficient to avoid any parallelism break in the workflow.
Let us first notice that grouping services does not break data parallelism, because this kind of parallelism only concerns a single service of the workflow. Moreover, service parallelism relies on the independence of the processing of two different data segments by two successive services. As service grouping does not prevent $B_i$ from processing a given piece of data while $A$ is processing another one (assuming that data parallelism is not broken, which is the case here), service grouping does not break service parallelism. We are thus left to prove that (1) and (2) guarantee that workflow parallelism is not broken by grouping $A$ and $B_i$. (1) guarantees that there is no workflow parallelism between $B_i$ and any $B_j$: workflow parallelism can thus involve $B_i$ only with services that are not children of $A$, and so cannot be broken by grouping $A$ and $B_i$. Similarly, (2) guarantees that there is no workflow parallelism between $A$ and any other ancestor of $B_i$: workflow parallelism can thus involve $A$ only with services that are not ancestors of $B_i$, and so cannot be broken by grouping $A$ and $B_i$. ■
Our grouping strategy tests this rule for each MOTEUR service of the workflow. Groups of more than two services may be recursively composed by successive matches of the grouping rule.
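The rule can be checked mechanically. A minimal Python sketch, with the service graph given as a child-adjacency dictionary (this is an illustration of the rule, not MOTEUR's actual implementation):

```python
def ancestors(graph, node):
    """Transitive ancestors of `node` in a DAG {service: set(children)}."""
    found = set()
    frontier = {p for p, children in graph.items() if node in children}
    while frontier:
        found |= frontier
        frontier = {p for p, children in graph.items()
                    if children & frontier} - found
    return found

def can_group(graph, a, b_i):
    """True iff grouping service `a` with its child `b_i` loses no
    parallelism, per conditions (1) and (2) of the grouping rule."""
    children = graph[a]
    if b_i not in children:
        return False
    # (1) b_i must be an ancestor of every other child b_j of a
    cond1 = all(b_i in ancestors(graph, b_j)
                for b_j in children if b_j != b_i)
    # (2) every ancestor c of b_i must be a itself or an ancestor of a
    anc_a = ancestors(graph, a)
    cond2 = all(c == a or c in anc_a for c in ancestors(graph, b_i))
    return cond1 and cond2
```

Applying `can_group` repeatedly, as the recursive strategy does, merges a sequential chain of MOTEUR services into a single group.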
The constraints applied by the matching rule are illustrated on three different grouping examples in figure 4. This simplified workflow was extracted from our medical imaging application (see figure 7). It is made of four MOTEUR services. As can be seen from the workflow graph, the data dependencies enforce a sequential execution of these four services. It is therefore expected that the four services be grouped into a single one in order to minimize the job submission overhead. In this figure, the notations near the services correspond to the ones introduced in the grouping rule. For each of the 3 examples of figure 4, the grouping of the two services outlined by a blue box is studied:
(1) On the left of figure 4, the tested MOTEUR service $A$ is crestLines. $A$ is connected to the workflow inputs and has two children: $B_0$ and $B_1$. $B_0$ is a father of $B_1$ and has a single ancestor, which is $A$. Thus, the rule matches: $A$ and $B_0$ can be grouped. If there were a service $C$ that is an ancestor of $B_0$ but not of $A$, as represented in the figure, the rule would not match: $A$ and $C$ would have to be executed in parallel before starting $B_0$. Similarly, if there were a service $D$ that is a child of $A$ but not of $B_0$, the rule would not match, as the workflow manager would need to communicate results during the execution of the grouped jobs in order to allow workflow parallelism between $B_0$ and $D$.
(2) In the middle of figure 4, the tested service $A$ is now crestMatch. $A$ has a single child: $B_0$. $B_0$ has two ancestors, $A$ and $C$. The rule matches because $C$ is an ancestor of $A$. $A$ and $B_0$ can then be grouped.
(3) On the right of figure 4, $A$ is the PFMatch service. It has only one child, $B_0$, which has a single ancestor, $A$. The rule matches and those services can thus be grouped.

Fig. 4. Service grouping examples. On this workflow, the grouping rule matches 3 times (once for each green box), thus resulting in a single service wrapping all 4. On the left part of the figure, a service C or D would prevent the grouping of crestLines and crestMatch because it would break workflow parallelism between A and C and between $B_0$ and D.
Finally, when $A$ is the PFRegister service, the grouping rule does not match because it does not have any children. Note that in this example, the recursive grouping strategy leads to a single job submission, as expected.
5.2 Dynamic generic service factory
In practice, grouping jobs in the task-based approach is straightforward, whereas it is usually not possible in the service-based approach, given that:
- the services composing the workflow are totally independent from each other
- the grid infrastructure handling the jobs does not have any information concerning the workflow and the job dependencies.
That is why a new architecture has to be designed to allow service grouping.
An advantage of the SOA design of our workflow engine is that it can dynamically enable service grouping by analyzing the workflow and generating grouped services on the fly. A service factory is added to the architecture; its role is to instantiate both the legacy code wrapping services and the grouped services. The complete architecture is diagrammed in figure 5.
Fig. 5. Services factory enabling service grouping. The MOTEUR factory is able to deploy a Web-Service from the description of an executable (see figure 1 for an example of such a description). To group services, the workflow engine (MOTEUR) dynamically invokes the services factory with the description of the algorithms to group (DESC(P1) and DESC(P2)). The factory then deploys a composite Web-Service P1+P2 that can be directly invoked by the workflow engine.
The service factory is responsible for dynamically generating and deploying application services. The aim of this factory is to achieve two antagonistic goals:
- To expose legacy codes as autonomous Web-Services respecting the main principles of SOA.
- To enable the grouping of two of these Web-Services as a unique one for optimizing the execution.
On one hand, the specific Web-Service implementation details (i.e. the execution of the wrapped code on a grid infrastructure) are hidden from the consumer. On the other hand, when the consumer is a workflow manager that can group jobs, it needs to be aware of the real nature of the Web-Services (the encapsulation of a MOTEUR descriptor) so that it can merge them at run time. We chose to use the WSDL XML extension mechanism, which allows user-defined XML elements to be inserted in the WSDL content itself. We thus strictly conform to the WSDL standard while enabling our optimization strategy.
In figure 5, we exemplify the architecture through a usage scenario:
**R.1** First, the legacy code provider registers a MOTEUR XML descriptor P1 to the MOTEUR factory.
**G.1** The factory then dynamically generates a Web-Service that wraps the submission of the legacy code to the grid via the generic service wrapper.
**R.2** Another provider does the same with the descriptor of P2.
The resulting Web-Services expose their WSDL contracts to the external world with a specific extension associated with the WSDL operation. For instance, the WSDL contract resulting from the deployment of the crestLines legacy code described in figure 1 is shown in figure 6. This WSDL document defines two types (CrestLines-request and CrestLines-response) corresponding to the descriptor inputs and outputs, and a single Execute operation. Notice that in the binding section, the WSDL document contains an extra MOTEUR-descriptor tag pointing to the URL of the legacy code descriptor file (location) and a binding to the Execute operation (soap:operation).
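As an illustration, a consumer aware of the extension can locate the descriptor URL with a few lines of standard XML processing. The element and attribute names (`MOTEUR-descriptor`, `location`) follow the description above, but the fragment and its namespacing are assumptions, not the exact WSDL of figure 6.

```python
import xml.etree.ElementTree as ET

# Minimal extended-WSDL fragment; tag and attribute names follow the
# paper's description, the URL and namespaces are illustrative.
wsdl = """<definitions xmlns="http://schemas.xmlsoap.org/wsdl/">
  <binding name="CrestLinesBinding">
    <MOTEUR-descriptor xmlns=""
        location="http://example.org/descriptors/crestLines.xml"/>
  </binding>
</definitions>"""

def descriptor_url(wsdl_text):
    """Return the legacy-code descriptor URL advertised in the extended
    WSDL, or None if the service carries no MOTEUR extension."""
    root = ET.fromstring(wsdl_text)
    for elem in root.iter():
        # Strip any XML namespace prefix before comparing tag names.
        if elem.tag.rsplit('}', 1)[-1] == 'MOTEUR-descriptor':
            return elem.get('location')
    return None

print(descriptor_url(wsdl))  # the descriptor URL
```

A plain WSDL consumer simply ignores the unknown element, which is why the extension keeps the contract standard-compliant.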
Suppose now that the workflow manager identifies a service grouping optimization (e.g. P1 and P2, displayed in green in figure 5). Thanks to its ability to discover the extended nature of these two services, the engine can retrieve the two corresponding MOTEUR descriptors.
**C.1+2** The workflow manager can ask the factory to combine them and
**G.1+2** generate a single composite Web-Service which exposes an operation taking its inputs from P1 (and P2 inputs coming from other external services) and returning the outputs defined by P2 (and P1 outputs going to other external services).
Fig. 6. Extended WSDL generated by the factory for the code introduced in figure 1
**I.1+2** The workflow manager can invoke this composite Web-Service. It is of the same type as any regular legacy code wrapping service and it is accessible through the same interface.
**S.1+2** It also delegates the grid submission to the generic submission Web-Service by sending the composite MOTEUR descriptor and the input links of P1 and P2 in the workflow.
6 Experiments on a production grid
To quantify the speed-up introduced by service grouping on a real workflow, we made experiments on the EGEE production grid infrastructure. The EGEE system is a pool of thousands of computers (standard PCs) and storage resources accessible through the gLite middleware. The resources are assembled in computing centers, each of them running its own internal batch scheduler. Jobs are submitted from a user interface to a central Resource Broker, which distributes them to the available resources. Access to EGEE grid resources is controlled per Virtual Organization (VO). For our VO, about 3000 CPUs accessible through 25 batch queues were available at the time of the experiments. The large scale and multi-user nature of this infrastructure makes the overhead due to submission, scheduling and queuing of the order of 5 to 10 minutes. Limiting job submissions by service grouping is therefore highly desirable on this kind of production infrastructure.
6.1 Experimental workflows
We made experiments on a medical image analysis application built from 6 legacy algorithms developed by the Asclepios team at INRIA Sophia-Antipolis [33,34]. The workflow of this application is represented in figure 7. It aims at assessing the accuracy of 4 registration algorithms, namely crestMatch, PFMatchICP/PFRegister, Baladin and Yasmina. A number of input image pairs constitute the input of the workflow (floating image and reference image). Those pairs are first registered by the crestMatch method and this result initializes the 3 remaining algorithms. At the end of the workflow, the MultiTransfoTest service is a statistical step that computes the accuracy of each algorithm from all the previously obtained results. crestLines is a preprocessing step for crestMatch and PFMatchICP. The total CPU time consumed by this workflow is about 15 minutes per input data set. For 126 input images, the CPU time is thus 31.5 hours, which motivates the use of grids for this application.
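The figures above, and the job counts reported in table 1, can be checked with a line of arithmetic:

```python
# CPU time of the full application workflow (15 CPU-minutes per data set).
cpu_minutes_per_pair = 15
pairs = 126
total_cpu_hours = cpu_minutes_per_pair * pairs / 60  # 31.5 hours

# Job counts for the 4-service sub-workflow: one job per service
# invocation without grouping, one job per data set with grouping.
services_in_chain = 4
regular_jobs = services_in_chain * pairs  # 504
grouped_jobs = pairs                      # 126
print(total_cpu_hours, regular_jobs, grouped_jobs)  # 31.5 504 126
```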
To show how service grouping is able to speed up the execution of highly sequential applications, we also considered a sub-workflow of our application, as shown in figure 7. It is made of 4 services that correspond to the crestLines, crestMatch, PFMatchICP and PFRegister ones in the application workflow. Our grouping rule groups those 4 services into a single one, as detailed in the example of figure 4. It is important to notice that even if this sub-workflow is sequential, and thus does not benefit from workflow parallelism, its execution on a grid does make sense because of data and service parallelisms. To evaluate the impact of our grouping strategy on the performance, we ran both workflows for a growing number of input image pairs (table 1).
Table 1
<table>
<thead>
<tr>
<th rowspan="2">Number of input image pairs</th>
<th colspan="2">Number of jobs (sub-workflow, figure 4)</th>
</tr>
<tr>
<th>Regular</th>
<th>Grouping</th>
</tr>
</thead>
<tbody>
<tr>
<td>12</td>
<td>48</td>
<td>12</td>
</tr>
<tr>
<td>66</td>
<td>264</td>
<td>66</td>
</tr>
<tr>
<td>126</td>
<td>504</td>
<td>126</td>
</tr>
</tbody>
</table>
Grouping strategy speed-ups.
Those experiments involved 3060 jobs, corresponding to a total CPU time of 4.5 days. For each number of data items, only a single workflow execution was performed. Yet, to guarantee that the grid status was the same for the regular (without grouping) and optimized strategies, we submitted the two cases simultaneously for each data set, using the same Resource Broker. The optimized and regular executions were thus compared under similar conditions.
The scheduling of the jobs submitted by these workflows is completely delegated to the EGEE grid middleware. On this infrastructure, two different levels of scheduling are performed. First, a particular computing center is selected by the Resource Broker, according to the job's requirements and the load of the sites. Second, a batch scheduler is responsible for node allocation at the site level. Consequently, we had no control over the scheduling, and each job may have been executed on any of the nodes available for our VO. Several grid sites may be used by a single workflow. The overall EGEE scheduling policy is not centrally defined but results from the interactions of largely autonomous policies.
6.2 Results
Table 1 presents the speed-ups induced by our grouping strategy for a growing number of input image pairs and for the two experimental workflows described above. This speed-up is computed as the ratio of the regular grid execution time (where each service invocation leads to a job submission) over the execution time using the grouping strategy. We can see in this table that service grouping does provide a significant speed-up of the workflow execution, ranging from 1.23 to 2.91.
The speed-up values are greater on the sub-workflow than on the whole application workflow. Indeed, in the sub-workflow, 4 services are grouped into a single one, thus saving 3 job submissions for each input data set. In the whole application workflow, the grouping rule is applied only twice, thus saving only 2 job submissions for each input data set, as depicted in figure 7.
The overhead of the service grouping optimization proposed in this paper remains negligible with respect to the grid overhead. Indeed, grouping services is done once for all the data items, at the beginning of the workflow execution. It only consists of searching for workflow services matching the rule described in section 5.1. Thus, the overhead of service grouping is of the order of a few seconds, whereas the grid introduces an overhead of several minutes per job.
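A back-of-the-envelope sketch shows why saved submissions dominate. The 7.5-minute figure below is an assumed midpoint of the 5 to 10 minutes of per-job overhead quoted in section 6, and summing saved overheads linearly ignores the parallel execution of jobs, so this bounds the avoided work rather than the makespan reduction:

```python
def saved_overhead_minutes(jobs_saved_per_data_set, data_sets,
                           overhead_min_per_job=7.5):
    """Total grid overhead (submission + scheduling + queuing) avoided
    by grouping, summed over all saved job submissions."""
    return jobs_saved_per_data_set * data_sets * overhead_min_per_job

# Sub-workflow: 3 submissions saved per data set (4 services -> 1 job).
print(saved_overhead_minutes(3, 126))  # 2835.0 minutes of avoided overhead
# Whole application: 2 submissions saved per data set.
print(saved_overhead_minutes(2, 126))  # 1890.0 minutes
```

Either figure dwarfs the few seconds spent matching the grouping rule itself.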
The service grouping results presented in this section are admittedly limited to the scope of our particular application. Investigating how the workflow topology and the nature of the submitted jobs impact the speed-up values would be an interesting extension of this work. Nonetheless, forecasting the performance of a workflow on a production grid such as EGEE is definitely a non-trivial problem. This kind of infrastructure is highly variable and non-stationary, so that deterministic models are hardly usable. We have started investigating probabilistic models in order to predict the impact of a given optimization on the execution time of a workflow (e.g. in [35]).
7 Conclusion
In this paper, we discussed the advantages of the service-oriented approach for enabling scientific application codes on a grid infrastructure. We described an application-independent, non-intrusive legacy code wrapper that works at run time by interpreting a command-line description file. We designed a workflow manager SOA taking advantage of this wrapper to enact complex scientific applications on a grid. Any legacy code-based application can thus be instantiated by only defining textual MOTEUR descriptors.
We then introduced a workflow optimization strategy based on an extension of the wrapper. This strategy consists of grouping services that do not benefit from any parallelism, in order to reduce the impact of the grid overhead. We took advantage of the flexibility of the workflow architecture to introduce a new service factory enabling dynamic and automated instantiation of both the legacy code wrapping services and the grouped services.
We showed results on a real medical imaging application workflow deployed on the EGEE production grid infrastructure. Our grouping strategy is able to
Fig. 7. Workflow of the application. Services to be grouped are squared in blue. The extracted sub-workflow is grouped into a single service, as detailed on figure 4. Only crestLines, crestMatch, PFMatchICP, PFRegister, Yasmina and Baladin lead to a grid job submission. The other services are computed locally.
provide significant speed-ups in the range 1.2 to 2.9 on a real application. On more sequential workflows, the speed-up increases to almost 3.
It is important to notice that the grouping strategy presented in this paper is very unlikely to slow down the application because it does not break any parallelism (even though some side effects resulting from an increase of the job size may limit the expected speed-up). A future direction for service grouping could be to limit parallelism at some point, thus further reducing the number of submitted jobs and the impact of the grid overhead on the execution. In this case, a compromise would have to be found between parallelism loss and overhead reduction. We started investigating such a strategy in [36], where data parallelism is restricted in order to limit the impact of the latency. Breaking workflow parallelism to reduce the number of submitted jobs may also be envisaged.
Acknowledgments
This work is partially funded by the AGIR project (http://www.aci-agir.org/) from the French research program “ACI-Masse de données” and the GWENDIA project (http://gwendia.polytech.unice.fr, contract ANR-06-MDCA-009) from the French National Research Agency (ANR). We are grateful to the EGEE European project for providing the grid infrastructure and user assistance.
References
MLRun Pipeline with Iguazio
NetApp Solutions
NetApp
July 24, 2024
MLRun Pipeline with Iguazio
TR-4834: NetApp and Iguazio for MLRun Pipeline
Rick Huang, David Arnette, NetApp
Marcelo Litovsky, Iguazio
This document covers the details of the MLRun pipeline using NetApp ONTAP AI, NetApp AI Control Plane, NetApp Cloud Volumes software, and the Iguazio Data Science Platform. We used Nuclio serverless functions, Kubernetes Persistent Volumes, NetApp Cloud Volumes, NetApp Snapshot copies, Grafana dashboards, and other services on the Iguazio platform to build an end-to-end data pipeline for the simulation of network failure detection. We integrated Iguazio and NetApp technologies to enable fast model deployment, data replication, and production monitoring capabilities on premises as well as in the cloud.
The work of a data scientist should be focused on the training and tuning of machine learning (ML) and artificial intelligence (AI) models. However, according to research by Google, data scientists spend ~80% of their time figuring out how to make their models work with enterprise applications and run at scale, as shown in the following image depicting model development in the AI/ML workflow.
To manage end-to-end AI/ML projects, a wider understanding of enterprise components is needed. Although DevOps has taken over the definition, integration, and deployment of these types of components, machine learning operations target a similar flow that includes AI/ML projects. To get an idea of what an end-to-end AI/ML pipeline touches in the enterprise, see the following list of required components:
- Storage
In this paper, we demonstrate how the partnership between NetApp and Iguazio drastically simplifies the development of an end-to-end AI/ML pipeline. This simplification accelerates the time to market for all of your AI/ML applications.
**Target Audience**
The world of data science touches multiple disciplines in information technology and business.
- The data scientist needs the flexibility to use their tools and libraries of choice.
- The data engineer needs to know how the data flows and where it resides.
- A DevOps engineer needs the tools to integrate new AI/ML applications into their CI/CD pipelines.
- Business users want to have access to AI/ML applications.

We describe how NetApp and Iguazio help each of these roles bring value to the business with our platforms.
**Solution Overview**
This solution follows the lifecycle of an AI/ML application. We start with the work of data scientists to define the different steps needed to prep data and train and deploy models. We follow with the work needed to create a full pipeline with the ability to track artifacts, experiment with execution, and deploy to Kubeflow. To complete the full cycle, we integrate the pipeline with NetApp Cloud Volumes to enable data versioning, as seen in the following image.
Technology Overview
This article provides an overview of the solution for MLRun pipeline using NetApp ONTAP AI, NetApp AI Control Plane, NetApp Cloud Volumes software, and the Iguazio Data Science Platform.
NetApp Overview
NetApp is the data authority for the hybrid cloud. NetApp provides a full range of hybrid cloud data services that simplify management of applications and data across cloud and on-premises environments to accelerate digital transformation. Together with our partners, NetApp empowers global organizations to unleash the full potential of their data to expand customer touch points, foster greater innovation, and optimize their operations.
NetApp ONTAP AI
NetApp ONTAP AI, powered by NVIDIA DGX systems and NetApp cloud-connected all-flash storage, streamlines the flow of data reliably and speeds up analytics, training, and inference with your data fabric that spans from edge to core to cloud. It gives IT organizations an architecture that provides the following benefits:
- Eliminates design complexities
- Allows independent scaling of compute and storage
- Enables customers to start small and scale seamlessly
- Offers a range of storage options for various performance and cost points
NetApp ONTAP AI offers converged infrastructure stacks incorporating NVIDIA DGX-1, a petaflop-scale AI system, and NVIDIA Mellanox high-performance Ethernet switches to unify AI workloads, simplify deployment, and accelerate ROI. We leveraged ONTAP AI with one DGX-1 and a NetApp AFF A800 storage system for this technical report.
NetApp AI Control Plane
The NetApp AI Control Plane enables you to unleash AI and ML with a solution that offers extreme scalability, streamlined deployment, and nonstop data availability. The AI Control Plane solution integrates Kubernetes and Kubeflow with a data fabric enabled by NetApp. Kubernetes, the industry-standard container orchestration platform for cloud-native deployments, enables workload scalability and portability. Kubeflow is an open-source machine-learning platform that simplifies management and deployment, enabling developers to do more data science in less time. A data fabric enabled by NetApp offers uncompromising data availability and portability to make sure that your data is accessible across the pipeline, from edge to core to cloud. This technical report uses the NetApp AI Control Plane in an MLRun pipeline. The following image shows the Kubernetes cluster management page, where you can have different endpoints for each cluster. We connected NFS Persistent Volumes to the Kubernetes cluster, and the following images show a Persistent Volume connected to the cluster, where NetApp Trident offers persistent storage support and data management capabilities.
### 4 Kubernetes Clusters
<table>
<thead>
<tr>
<th>Cluster</th>
<th>Cluster Endpoint</th>
<th>Cluster Version</th>
<th>Trident Version</th>
<th>Working Environments</th>
</tr>
</thead>
<tbody>
<tr>
<td>kubernetes</td>
<td><a href="https://3.20.111.39:6443">https://3.20.111.39:6443</a></td>
<td>v1.15.5</td>
<td>19.07.1</td>
<td>0</td>
</tr>
<tr>
<td>kubernetes</td>
<td><a href="https://172.31.14.31:6443">https://172.31.14.31:6443</a></td>
<td>v1.15.5</td>
<td>19.07.1</td>
<td>1</td>
</tr>
</tbody>
</table>
### Persistent Volumes for Kubernetes

Cloud Volumes ONTAP is connected to one Kubernetes cluster. You can connect another Kubernetes cluster to this Cloud Volumes ONTAP system. If the Kubernetes cluster is in a different network than Cloud Volumes ONTAP, specify a custom export policy (for example, 172.31.0.0/16) to provide access to clients. The connection dialog also lets you choose NFS or iSCSI and set the volume as the default storage class.
### Volumes

<table>
<thead>
<tr>
<th>Volumes</th>
<th>300 GB Allocated</th>
<th>1.43 GB Total Used</th>
</tr>
</thead>
<tbody>
<tr>
<td>Disk Type</td>
<td colspan="2">GP2</td>
</tr>
<tr>
<td>Tiering Policy</td>
<td colspan="2">None</td>
</tr>
<tr>
<td>Backup</td>
<td colspan="2">OFF</td>
</tr>
<tr>
<td>EBS Used</td>
<td colspan="2">1.25 GB</td>
</tr>
</tbody>
</table>
**Iguazio Overview**
The Iguazio Data Science Platform is a fully integrated and secure data-science platform as a service (PaaS) that simplifies development, accelerates performance, facilitates collaboration, and addresses operational challenges. This platform incorporates the following components, and the Iguazio Data Science Platform is presented in the following image:
- A data-science workbench that includes Jupyter Notebooks, integrated analytics engines, and Python packages
- Model management with experiments tracking and automated pipeline capabilities
- Managed data and ML services over a scalable Kubernetes cluster
- Nuclio, a real-time serverless functions framework
- An extremely fast and secure data layer that supports SQL, NoSQL, time-series databases, files (simple objects), and streaming
- Integration with third-party data sources such as NetApp, Amazon S3, HDFS, SQL databases, and streaming or messaging protocols
- Real-time dashboards based on Grafana
Software and Hardware Requirements
This article defines the software and hardware requirements that must be met in order to deploy this solution.
Network Configuration
The following is the network configuration requirement for setting up in the cloud:
- The Iguazio cluster and NetApp Cloud Volumes must be in the same virtual private cloud.
- The cloud manager must have access to port 6443 on the Iguazio app nodes.
- We used Amazon Web Services in this technical report. However, users have the option of deploying the solution with any cloud provider. For on-premises testing in ONTAP AI with NVIDIA DGX-1, we used the Iguazio hosted DNS service for convenience.
Clients must be able to access dynamically created DNS domains. Customers can use their own DNS if desired.
Hardware Requirements
You can install Iguazio on-premises in your own cluster. We have verified the solution in NetApp ONTAP AI with an NVIDIA DGX-1 system. The following table lists the hardware used to test this solution.
<table>
<thead>
<tr>
<th>Hardware</th>
<th>Quantity</th>
</tr>
</thead>
<tbody>
<tr>
<td>DGX-1 systems</td>
<td>1</td>
</tr>
<tr>
<td>NetApp AFF A800 system</td>
<td>1 high-availability (HA) pair, includes 2 controllers and 48 NVMe SSDs (3.8TB or above)</td>
</tr>
<tr>
<td>Cisco Nexus 3232C network switches</td>
<td>2</td>
</tr>
</tbody>
</table>
The following table lists the software components required for on-premises testing:
<table>
<thead>
<tr>
<th>Software</th>
<th>Version or Other Information</th>
</tr>
</thead>
<tbody>
<tr>
<td>NetApp ONTAP data management software</td>
<td>9.7</td>
</tr>
<tr>
<td>Cisco NX-OS switch firmware</td>
<td>7.0(3)i6(1)</td>
</tr>
<tr>
<td>NVIDIA DGX OS</td>
<td>4.4 - Ubuntu 18.04 LTS</td>
</tr>
<tr>
<td>Docker container platform</td>
<td>19.03.5</td>
</tr>
<tr>
<td>Container version</td>
<td>20.01-tf1-py2</td>
</tr>
<tr>
<td>Machine learning framework</td>
<td>TensorFlow 1.15.0</td>
</tr>
<tr>
<td>Iguazio</td>
<td>Version 2.8+</td>
</tr>
<tr>
<td>ESX Server</td>
<td>6.5</td>
</tr>
</tbody>
</table>
This solution was fully tested with Iguazio version 2.5 and NetApp Cloud Volumes ONTAP for AWS. The Iguazio cluster and NetApp software are both running on AWS.
<table>
<thead>
<tr>
<th>Software</th>
<th>Version or Type</th>
</tr>
</thead>
<tbody>
<tr>
<td>Iguazio</td>
<td>Version 2.8+</td>
</tr>
<tr>
<td>App node</td>
<td>M5.4xlarge</td>
</tr>
<tr>
<td>Data node</td>
<td>I3.4xlarge</td>
</tr>
</tbody>
</table>
**Network Device Failure Prediction Use Case Summary**
This use case is based on an Iguazio customer in the telecommunications space in Asia. With 100k enterprise customers and 125k network outage events per year, there was a critical need to predict and take proactive action to prevent network failures from affecting customers. This solution provided them with the following benefits:
- Predictive analytics for network failures
- Integration with a ticketing system
- Taking proactive action to prevent network failures
As a result of this implementation of Iguazio, 60% of failures were proactively prevented.
**Setup Overview**
Iguazio can be installed on-premises or on a cloud provider.
**Iguazio Installation**
Provisioning can be done as a service and managed by Iguazio or by the customer. In both cases, Iguazio provides a deployment application (Provazio) to deploy and manage clusters.
For on-premises installation, please refer to [NVA-1121](#) for compute, network, and storage setup. On-premises deployment of Iguazio is provided by Iguazio without additional cost to the customer. See [this page](#) for DNS
Configuring Kubernetes Cluster
This section is divided into two parts for cloud and on-premises deployment respectively.
Cloud Deployment Kubernetes Configuration
Through NetApp Cloud Manager, you can define the connection to the Iguazio Kubernetes cluster. Trident requires access to multiple resources in the cluster to make the volume available.
1. To enable access, obtain the Kubernetes config file from one of the Iguazio nodes. The file is located under `/home/Iguazio/.kube/config`. Download this file to your desktop.
2. Go to Discover Cluster to configure.
3. Upload the Kubernetes config file. See the following image.
**Upload Kubernetes Configuration File**
Upload the Kubernetes configuration file (kubeconfig) so Cloud Manager can install Trident on the Kubernetes cluster.
Connecting Cloud Volumes ONTAP with a Kubernetes cluster enables users to request and manage persistent volumes using native Kubernetes interfaces and constructs. Users can take advantage of ONTAP’s advanced data management features without having to know anything about it. Storage provisioning is enabled by using NetApp Trident. Learn more about Trident for Kubernetes.
4. Deploy Trident and associate a volume with the cluster. See the following image on defining and assigning a Persistent Volume to the Iguazio cluster. This process creates a Persistent Volume (PV) in Iguazio's Kubernetes cluster. Before you can use it, you must define a Persistent Volume Claim (PVC).
On-Premises Deployment Kubernetes Configuration
For on-premises installation of NetApp Trident, see TR-4798 for details. After configuring your Kubernetes cluster and installing NetApp Trident, you can connect Trident to the Iguazio cluster to enable NetApp data management capabilities, such as taking Snapshot copies of your data and model.
Define Persistent Volume Claim
This article demonstrates how to define a persistent volume claim on a Jupyter notebook.
1. Save the following YAML to a file to create a PVC of type Basic.
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: basic
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: netapp-file
```
2. Apply the YAML file to your Iguazio Kubernetes cluster.
```
kubectl -n default-tenant apply -f <your yaml file>
```
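The same claim can also be built programmatically. The sketch below only constructs and inspects the manifest as a Python dict mirroring the YAML above; actually submitting it to the cluster would require the Kubernetes Python client and cluster access, which are outside the scope of this document.

```python
# Build the PVC manifest from step 1 as a Python dict; the storage
# class name `netapp-file` comes from the Trident setup above.
def basic_pvc(name="basic", size="100Gi", storage_class="netapp-file"):
    return {
        "kind": "PersistentVolumeClaim",
        "apiVersion": "v1",
        "metadata": {"name": name},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "resources": {"requests": {"storage": size}},
            "storageClassName": storage_class,
        },
    }

pvc = basic_pvc()
print(pvc["spec"]["resources"]["requests"]["storage"])  # 100Gi
```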
**Attach NetApp Volume to the Jupyter Notebook**
Iguazio offers several managed services to provide data scientists with a full end-to-end stack for development and deployment of AI/ML applications. You can read more about these components at the *Iguazio Overview of Application Services and Tools*.
One of the managed services is Jupyter Notebook. Each developer gets their own deployment of a notebook container with the resources they need for development. To give them access to the NetApp Cloud Volume, you can assign the volume to their container; the resource allocation, running user, and environment variable settings for Persistent Volume Claims are presented in the following image.
For an on-premises configuration, you can refer to TR-4798 on the Trident setup to enable NetApp ONTAP data management capabilities, such as taking Snapshot copies of your data or model for versioning control. Add the following line in your Trident back-end config file to make Snapshot directories visible:
```json
{
...
"defaults": {
"snapshotDir": "true"
}
}
```
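The snapshotDir change above can also be applied programmatically. The following is a minimal sketch (not part of Trident itself) that loads a backend config as JSON and enables the flag; the `"ontap-nas"` backend shown is a hypothetical example.

```python
import json

def enable_snapshot_dir(config_text: str) -> str:
    """Return the backend config JSON with the snapshotDir flag enabled.

    Trident expects the flag as the string "true" inside "defaults".
    """
    config = json.loads(config_text)
    config.setdefault("defaults", {})["snapshotDir"] = "true"
    return json.dumps(config, indent=2)

# Example: a minimal (hypothetical) backend definition.
backend = '{"version": 1, "storageDriverName": "ontap-nas"}'
print(enable_snapshot_dir(backend))
```

Writing the transformation as a function makes it easy to apply the same change consistently across several backend files.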
You must create a Trident back-end config file in JSON format, and then run the following *Trident command* to reference it:
```
tridentctl create backend -f <backend-file>
```
Deploying the Application
The following sections describe how to install and deploy the application.
Get Code from GitHub
Now that the NetApp Cloud Volume or NetApp Trident volume is available to the Iguazio cluster and the developer environment, you can start reviewing the application.
Users have their own workspace (directory). On every notebook, the path to the user directory is /User. The Iguazio platform manages the directory. If you follow the instructions above, the NetApp Cloud volume is available in the /netapp directory.
Get the code from GitHub using a Jupyter terminal.
At the Jupyter terminal prompt, clone the project.
```
cd /User
git clone .
```
You should now see the netops-netapp folder on the file tree in Jupyter workspace.
Configure Working Environment
Copy the Notebook set_env-Example.ipynb as set_env.ipynb. Open and edit set_env.ipynb. This notebook sets variables for credentials, file locations, and execution drivers.
If you follow the instructions above, the following steps are the only changes to make:
1. Obtain this value from the Iguazio services dashboard: `docker_registry`
**Example:** `docker-registry.default-tenant.app.clusterq.iguaziodev.com:80`
2. Change `admin` to your Iguazio username:
`IGZ_CONTAINER_PATH = '/users/admin'`
The following are the ONTAP system connection details. Include the volume name that was generated when Trident was installed. The following setting is for an on-premises ONTAP cluster:
```python
ontapClusterMgmtHostname = '0.0.0.0'
ontapClusterAdminUsername = 'USER'
ontapClusterAdminPassword = 'PASSWORD'
sourceVolumeName = 'SOURCE VOLUME'
```
The following setting is for Cloud Volumes ONTAP:
```python
MANAGER=ontapClusterMgmtHostname
svm='svm'
email='email'
password=ontapClusterAdminPassword
weid="weid"
volume=sourceVolumeName
```
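The two settings blocks differ only in which fields they require. A small sketch (a hypothetical helper, not part of the demo code) makes the distinction explicit and catches a missing value before the pipeline runs:

```python
# Required connection fields per deployment type (illustrative only).
REQUIRED_FIELDS = {
    "onprem": ["ontapClusterMgmtHostname", "ontapClusterAdminUsername",
               "ontapClusterAdminPassword", "sourceVolumeName"],
    "cloud": ["ontapClusterMgmtHostname", "svm", "email",
              "ontapClusterAdminPassword", "weid", "sourceVolumeName"],
}

def validate_settings(deployment: str, settings: dict) -> list:
    """Return the names of any required fields that are missing or empty."""
    return [f for f in REQUIRED_FIELDS[deployment] if not settings.get(f)]

settings = {
    "ontapClusterMgmtHostname": "0.0.0.0",
    "ontapClusterAdminUsername": "USER",
    "ontapClusterAdminPassword": "PASSWORD",
    "sourceVolumeName": "SOURCE VOLUME",
}
print(validate_settings("onprem", settings))  # → []
```

A check like this fails fast in `set_env.ipynb` rather than deep inside a pipeline step.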
Create Base Docker Images
Everything you need to build an ML pipeline is included in the Iguazio platform. The developer can define the specifications of the Docker images required to run the pipeline and execute the image creation from Jupyter Notebook. Open the notebook `create_images.ipynb` and Run All Cells.
This notebook creates two images that we use in the pipeline.
- **iguazio/netapp.** Used to handle ML tasks.
**Create image for training pipeline**
```python
fn.build_config(image='docker_registry:/iguazio/netapp',
                commands=['pip install v3io_frames fsspec==0.3.3 PyYAML==5.1.2 '
                          'pyarrow==0.15.1 pandas==0.25.3 matplotlib seaborn yellowbrick'])
fn.deploy()
```
- **netapp/pipeline.** Contains utilities to handle NetApp Snapshot copies.
**Create image for ONTAP utilities**
```python
fn.build_config(image='docker_registry:/netapp/pipeline:latest',
                commands=['apt -y update',
                          'pip install v3io_frames netapp_ontap'])
fn.deploy()
```
Review Individual Jupyter Notebooks
The following table lists the libraries and frameworks we used to build this task. All these components have been fully integrated with Iguazio’s role-based access and security controls.
<table>
<thead>
<tr>
<th>Libraries/Framework</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>MLRun</td>
<td>An open-source framework managed by Iguazio that enables the assembly, execution, and monitoring of an ML/AI pipeline.</td>
</tr>
<tr>
<td>Nuclio</td>
<td>A serverless functions framework integrated with Iguazio. Also available as an open-source project managed by Iguazio.</td>
</tr>
<tr>
<td>Kubeflow</td>
<td>A Kubernetes-based framework to deploy the pipeline. This is also an open-source project to which Iguazio contributes. It is integrated with Iguazio for added security and integration with the rest of the infrastructure.</td>
</tr>
<tr>
<td>Docker</td>
<td>A Docker registry run as a service in the Iguazio platform. You can also change this to connect to your registry.</td>
</tr>
<tr>
<td>NetApp Cloud Volumes</td>
<td>Cloud Volumes running on AWS give us access to large amounts of data and the ability to take Snapshot copies to version the datasets used for training.</td>
</tr>
<tr>
<td>Trident</td>
<td>Trident is an open-source project managed by NetApp. It facilitates the integration with storage and compute resources in Kubernetes.</td>
</tr>
</tbody>
</table>
We used several notebooks to construct the ML pipeline. Each notebook can be tested individually before being brought together in the pipeline. We cover each notebook individually following the deployment flow of this demonstration application.
The desired result is a pipeline that trains a model based on a Snapshot copy of the data and deploys the model for inference. A block diagram of a completed MLRun pipeline is shown in the following image.
Deploy Data Generation Function
This section describes how we used Nuclio serverless functions to generate network device data. The use case is adapted from an Iguazio client that deployed the pipeline and used Iguazio services to monitor and predict network device failures.
We simulated data coming from network devices. Executing the Jupyter notebook `data-generator.ipynb` creates a serverless function that runs every 10 minutes and generates a Parquet file with new data. To deploy the function, run all the cells in this notebook. See the Nuclio website to review any unfamiliar components in this notebook.
Every cell in the notebook is assumed to be part of the function; a cell that starts with the following comment is ignored when the function is generated. Import the Nuclio module to enable `%nuclio` magic.
```
# nuclio: ignore
import nuclio
```
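The `# nuclio: ignore` marker works because the deployment step scans the notebook cells and drops any cell carrying it before assembling the function body. The following is a rough stdlib-only sketch of that filtering (the real behavior lives in the nuclio tooling, not in this code):

```python
def strip_ignored_cells(cells):
    """Drop cells whose first non-blank line is '# nuclio: ignore'."""
    kept = []
    for cell in cells:
        lines = [line.strip() for line in cell.splitlines() if line.strip()]
        if lines and lines[0] == "# nuclio: ignore":
            continue  # excluded from the generated function
        kept.append(cell)
    return kept

cells = [
    "# nuclio: ignore\nimport nuclio",
    "def handler(context, event):\n    return 'ok'",
]
print(len(strip_ignored_cells(cells)))  # → 1
```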
In the spec for the function, we defined the environment in which the function executes, how it is triggered, and the resources it consumes.
The `init_context` function is invoked by the Nuclio framework upon initialization of the function.
```python
def init_context(context):
...
```
Any code not inside a function is executed when the function initializes. When the function is invoked, the handler function is executed. You can change the name of the handler and specify it in the function spec.
```python
def handler(context, event):
...
```
You can test the function from the notebook prior to deployment.
```python
%%time
# nuclio: ignore
init_context(context)
event = nuclio.Event(body='')
output = handler(context, event)
output
```
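The same local-test pattern can be reproduced outside Nuclio with plain stub objects, which is useful for unit testing handler logic before any deployment. The handler body below is a hypothetical stand-in, not the generator's actual code:

```python
from types import SimpleNamespace

def init_context(context):
    # One-time state set up when the function instance starts.
    context.user_data = SimpleNamespace(calls=0)

def handler(context, event):
    # Stand-in body: count invocations and echo the event payload.
    context.user_data.calls += 1
    return {"calls": context.user_data.calls, "body": event.body}

# Stub objects mimicking the attributes Nuclio's context and event expose.
context = SimpleNamespace(user_data=None)
event = SimpleNamespace(body="")
init_context(context)
print(handler(context, event))  # → {'calls': 1, 'body': ''}
```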
The function can be deployed from the notebook or it can be deployed from a CI/CD pipeline (adapting this code).
```python
addr = nuclio.deploy_file(name='generator', project='netops', spec=spec, tag='v1.1')
```
**Pipeline Notebooks**
These notebooks are not meant to be executed individually for this setup. This is just a review of each notebook. We invoked them as part of the pipeline. To execute them individually, review the MLRun documentation to execute them as Kubernetes jobs.
**snap_cv.ipynb**
This notebook handles the Cloud Volume Snapshot copies at the beginning of the pipeline. It passes the name of the volume to the pipeline context. This notebook invokes a shell script to handle the Snapshot copy. While running in the pipeline, the execution context contains variables to help locate all files needed for execution.
While writing this code, the developer does not have to worry about the file location in the container that executes it. As described later, this application is deployed with all its dependencies, and it is the definition of the pipeline parameters that provides the execution context.
```
command = os.path.join(context.get_param('APP_DIR'), "snap_cv.sh")
```
The created Snapshot copy location is placed in the MLRun context to be consumed by steps in the pipeline.
```
context.log_result('snapVolumeDetails',snap_path)
```
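The shell script's actual naming scheme for the Snapshot copy is not shown here; a common convention is a timestamped, sortable name, sketched below as an assumption:

```python
from datetime import datetime, timezone

def snapshot_name(volume, when=None):
    """Build a unique, sortable Snapshot copy name for a volume."""
    when = when or datetime.now(timezone.utc)
    return f"{volume}_snap_{when:%Y%m%d-%H%M%S}"

fixed = datetime(2020, 3, 24, 18, 51, 9, tzinfo=timezone.utc)
print(snapshot_name("netops", fixed))  # → netops_snap_20200324-185109
```

Whatever the scheme, logging the resulting path with `context.log_result` is what lets later steps find the copy.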
The next three notebooks are run in parallel.
**data-prep.ipynb**
Raw metrics must be turned into features to enable model training. This notebook reads the raw metrics from the Snapshot directory and writes the features for model training to the NetApp volume.
When running in the context of the pipeline, the input `DATA_DIR` contains the Snapshot copy location.
```
metrics_table = os.path.join(
    str(mlruncontext.get_input('DATA_DIR', os.getenv('DATA_DIR', '/netapp'))),
    mlruncontext.get_param('metrics_table', os.getenv('metrics_table', 'netops_metrics_parquet')))
```
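The nested `get_input`/`get_param`/`getenv` calls implement a three-level fallback: pipeline context first, then an environment variable, then a hard-coded default. Extracted as a plain function (names here are illustrative, not MLRun API):

```python
import os

def resolve(name, context_values, default):
    """Pipeline-context value wins, then the environment, then the default."""
    if name in context_values:
        return context_values[name]
    return os.getenv(name, default)

os.environ.pop("DATA_DIR", None)           # ensure a clean environment
print(resolve("DATA_DIR", {}, "/netapp"))  # → /netapp
print(resolve("DATA_DIR", {"DATA_DIR": "/v3io/snap"}, "/netapp"))  # → /v3io/snap
```

This layering is why the same notebook runs unchanged both standalone and as a pipeline step.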
**describe.ipynb**
To visualize the incoming metrics, we deploy a pipeline step that provides plots and graphs that are available through the Kubeflow and MLRun UIs. Each execution has its own version of this visualization tool.
```
ax.set_title("features correlation")
plt.savefig(os.path.join(base_path, "plots/corr.png"))
context.log_artifact(PlotArtifact("correlation", body=plt.gcf()),
local_path="plots/corr.html")
```
**deploy-feature-function.ipynb**
We continuously monitor the metrics looking for anomalies. This notebook creates a serverless function that generates the features needed to run prediction on incoming metrics. This notebook invokes the creation of the function; the function code itself is in the notebook `data-prep.ipynb`. Notice that we use the same notebook as a step in the pipeline for this purpose.
**training.ipynb**
After we create the features, we trigger the model training. The output of this step is the model to be used for inferencing. We also collect statistics to keep track of each execution (experiment).
For example, the following command enters the accuracy score into the context for that experiment. This value is visible in Kubeflow and MLRun.
```python
context.log_result('accuracy', score)
```
**deploy-inference-function.ipynb**
The last step in the pipeline is to deploy the model as a serverless function for continuous inferencing. This notebook invokes the creation of the serverless function defined in `nuclio-inference-function.ipynb`.
Review and Build Pipeline
The combination of running all the notebooks in a pipeline enables the continuous run of experiments to reassess the accuracy of the model against new metrics. First, open the `pipeline.ipynb` notebook. We take you through details that show how NetApp and Iguazio simplify the deployment of this ML pipeline.
We use MLRun to provide context and handle resource allocation to each step of the pipeline. The MLRun API service runs in the Iguazio platform and is the point of interaction with Kubernetes resources. Each developer cannot directly request resources; the API handles the requests and enables access controls.
```python
# MLRun API connection definition
mlconf.dbpath = 'http://mlrun-api:8080'
```
The pipeline can work with NetApp Cloud Volumes and on-premises volumes. We built this demonstration to use Cloud Volumes, but you can see in the code the option to run on-premises.
The first action needed to turn a Jupyter notebook into a Kubeflow step is to turn the code into a function. A function has all the specifications required to run that notebook. As you scroll down the notebook, you can see that we define a function for every step in the pipeline.
<table>
<thead>
<tr>
<th>Part of the Notebook</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code_to_function></td>
<td>Name of the function. Project: the project name, used to organize all project artifacts; this is visible in the MLRun UI. Kind: in this case, a Kubernetes job; this could also be Dask, MPI, Spark on Kubernetes, and more (see the MLRun documentation for details). File: the name of the notebook; this can also be a location in Git (HTTP).</td>
</tr>
<tr>
<td>image</td>
<td>The name of the Docker image we are using for this step. We created this earlier with the create-image.ipynb notebook.</td>
</tr>
<tr>
<td>volume_mounts & volumes</td>
<td>Details to mount the NetApp Cloud Volume at run time.</td>
</tr>
</tbody>
</table>
We also define parameters for the steps.
After you have the function definition for all steps, you can construct the pipeline. We use the `kfp` module to make this definition. The difference between using MLRun and building the pipeline on your own is that MLRun simplifies and shortens the code.
The functions we defined are turned into step components using the `as_step` function of MLRun.
**Snapshot Step Definition**
Initiate a Snapshot function, output, and mount v3io as source:
```python
snap = snapfn.as_step(NewTask(handler='handler', params=snap_params),
                      name='NetApp_Cloud_Volume_Snapshot',
                      outputs=['snapVolumeDetails', 'training_parquet_file']).apply(mount_v3io())
```
<table>
<thead>
<tr>
<th>Parameters</th>
<th>Details</th>
</tr>
</thead>
<tbody>
<tr>
<td>NewTask</td>
<td>NewTask is the definition of the function run.</td>
</tr>
<tr>
<td>(MLRun module)</td>
<td>Handler: the name of the Python function to invoke. We used the name handler in the notebook, but it is not required. Params: the parameters we passed to the execution. Inside our code, we use context.get_param('PARAMETER') to get the values.</td>
</tr>
<tr>
<td>as_step</td>
<td>Name. Name of the Kubeflow pipeline step.</td>
</tr>
<tr>
<td>outputs</td>
<td>outputs. These are the values that the step adds to the dictionary on completion. Take a look at the snap_cv.ipynb notebook.</td>
</tr>
<tr>
<td></td>
<td>mount_v3io(). This configures the step to mount /User for the user executing the pipeline.</td>
</tr>
</tbody>
</table>
```python
prep = data_prep.as_step(name='data-prep',
handler='handler',
params=params,
inputs = {'DATA_DIR': snap.outputs['snapVolumeDetails']},
out_path=artifacts_path).apply(mount_v3io()).after(snap)
```
<table>
<thead>
<tr>
<th>Parameters</th>
<th>Details</th>
</tr>
</thead>
<tbody>
<tr>
<td>inputs</td>
<td>You can pass to a step the outputs of a previous step. In this case, snap.outputs['snapVolumeDetails'] is the name of the Snapshot copy we created on the snap step.</td>
</tr>
<tr>
<td>out_path</td>
<td>A location to place artifacts generated using the MLRun log_artifacts module.</td>
</tr>
</tbody>
</table>
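The `as_step(...).after(...)` chaining builds a dependency graph that Kubeflow later schedules, with one step's outputs feeding the next step's inputs. A toy stdlib model of the same idea (not kfp or MLRun code) shows the mechanics:

```python
class Step:
    """Minimal illustration of a pipeline step with dependencies."""
    def __init__(self, name, fn, inputs=None):
        self.name, self.fn, self.inputs = name, fn, inputs or {}
        self.deps, self.outputs = [], {}

    def after(self, other):
        self.deps.append(other)
        return self

def run_pipeline(steps):
    """Execute steps in dependency order, resolving inputs lazily."""
    done, remaining = [], list(steps)
    while remaining:
        step = next(s for s in remaining if all(d in done for d in s.deps))
        resolved = {k: (v() if callable(v) else v) for k, v in step.inputs.items()}
        step.outputs = step.fn(**resolved)
        done.append(step)
        remaining.remove(step)
    return done

snap = Step("snap", lambda: {"snapVolumeDetails": "netapp_snap_1"})
prep = Step("prep",
            lambda DATA_DIR: {"features": f"{DATA_DIR}/features.parquet"},
            inputs={"DATA_DIR": lambda: snap.outputs["snapVolumeDetails"]}).after(snap)
order = run_pipeline([prep, snap])
print([s.name for s in order])   # → ['snap', 'prep']
print(prep.outputs["features"])  # → netapp_snap_1/features.parquet
```

The lazy input lambdas mirror how `snap.outputs['snapVolumeDetails']` is only materialized once the snap step has actually run.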
You can run pipeline.ipynb from top to bottom, and then go to the Pipelines tab in the Iguazio dashboard to monitor progress.
Because we logged the accuracy of the training step in every run, we have a record of accuracy for each experiment, as seen in the following record of training accuracy.
<table>
<thead>
<tr>
<th>Run name</th>
<th>Status</th>
<th>Duration</th>
<th>Pipeline Version</th>
<th>Recurring</th>
<th>Start time</th>
<th>accuracy</th>
</tr>
</thead>
<tbody>
<tr>
<td>xgb_pipeline 2020-03-24 18-51-...</td>
<td>✔️</td>
<td>0:08:43</td>
<td>[View pipeline]</td>
<td>-</td>
<td>3/24/2020, 2:51:09 PM</td>
<td>0.985</td>
</tr>
<tr>
<td>xgb_pipeline 2020-03-19 13-31-...</td>
<td>✔️</td>
<td>0:08:14</td>
<td>[View pipeline]</td>
<td>-</td>
<td>3/19/2020, 9:31:19 AM</td>
<td>0.980</td>
</tr>
<tr>
<td>xgb_pipeline 2020-03-18 12-56-...</td>
<td>✔️</td>
<td>0:08:11</td>
<td>[View pipeline]</td>
<td>-</td>
<td>3/19/2020, 8:56:08 AM</td>
<td>0.980</td>
</tr>
<tr>
<td>xgb_pipeline 2020-03-17 19-49-...</td>
<td>✔️</td>
<td>0:08:03</td>
<td>[View pipeline]</td>
<td>-</td>
<td>3/17/2020, 3:40:31 PM</td>
<td>0.985</td>
</tr>
<tr>
<td>xgb_pipeline 2020-03-17 18-34-...</td>
<td>✔️</td>
<td>0:05:54</td>
<td>[View pipeline]</td>
<td>-</td>
<td>3/17/2020, 2:34:56 PM</td>
<td>0.980</td>
</tr>
<tr>
<td>xgb_pipeline 2020-03-17 17-34-...</td>
<td>✔️</td>
<td>0:04:48</td>
<td>[View pipeline]</td>
<td>-</td>
<td>3/17/2020, 1:34:16 PM</td>
<td>0.982</td>
</tr>
<tr>
<td>xgb_pipeline 2020-03-17 17-01-...</td>
<td>✔️</td>
<td>0:05:25</td>
<td>[View pipeline]</td>
<td>-</td>
<td>3/17/2020, 1:01:56 PM</td>
<td>0.987</td>
</tr>
<tr>
<td>xgb_pipeline 2020-03-16 16-47-...</td>
<td>✔️</td>
<td>0:06:08</td>
<td>[View pipeline]</td>
<td>-</td>
<td>3/16/2020, 12:47:19 ...</td>
<td>0.963</td>
</tr>
<tr>
<td>xgb_pipeline 2020-03-16 13-57-...</td>
<td>✔️</td>
<td>0:05:18</td>
<td>[View pipeline]</td>
<td>-</td>
<td>3/16/2020, 9:57:03 AM</td>
<td>0.980</td>
</tr>
</tbody>
</table>
If you select the Snapshot step, you can see the name of the Snapshot copy that was used to run this experiment.
The described step has visual artifacts to explore the metrics we used. You can expand to view the full plot as seen in the following image.
The MLRun API database also tracks inputs, outputs, and artifacts for each run, organized by project; an example can be seen in the following image.
For each job, we store additional details.
<table>
<thead>
<tr>
<th>Name</th>
<th>Date/Time</th>
<th>Status</th>
</tr>
</thead>
<tbody>
<tr>
<td>deploy-model</td>
<td>24 Mar, 14:56:03</td>
<td>...bcbe38e</td>
</tr>
<tr>
<td>xgb_train</td>
<td>24 Mar, 14:53:18</td>
<td>...5e95949</td>
</tr>
<tr>
<td>data-prep</td>
<td>24 Mar, 14:52:46</td>
<td>...126dc73</td>
</tr>
<tr>
<td>describe</td>
<td>24 Mar, 14:52:45</td>
<td>...c2a460e</td>
</tr>
<tr>
<td>deploy-features-function</td>
<td>24 Mar, 14:52:43</td>
<td>...50d8683</td>
</tr>
<tr>
<td>NetApp Cloud Volume_Snap</td>
<td>24 Mar, 14:51:22</td>
<td>...3108eb2</td>
</tr>
</tbody>
</table>
There is more information about MLRun than we can cover in this document. AI artifacts, including the definition of the steps and functions, can be saved to the API database, versioned, and invoked individually or as a full project. Projects can also be saved and pushed to Git for later use. We encourage you to learn more at the [MLRun GitHub site](https://github.com/Iguazio/MLRun).
**Deploy Grafana Dashboard**
After everything is deployed, we run inferences on new data. The models predict failure on network device equipment. The results of the prediction are stored in an Iguazio TimeSeries table. You can visualize the results with Grafana in the platform integrated with Iguazio’s security and data access policy.
You can deploy the dashboard by importing the provided JSON file into the Grafana interfaces in the cluster.
1. To verify that the Grafana service is running, look under Services.
**Services**
<table>
<thead>
<tr>
<th>Name</th>
<th>Running User</th>
<th>Version</th>
<th>CPU (cores)</th>
<th>Memory</th>
</tr>
</thead>
<tbody>
<tr>
<td>docker-registry</td>
<td></td>
<td>2.7.1</td>
<td>96µ</td>
<td>1.67 GB</td>
</tr>
<tr>
<td>framesd</td>
<td></td>
<td>0.5.10</td>
<td>369µ</td>
<td>795.19 MB</td>
</tr>
<tr>
<td>grafana</td>
<td></td>
<td>6.6.0</td>
<td>1m</td>
<td>38.39 MB</td>
</tr>
<tr>
<td>jupyter</td>
<td>admin</td>
<td>1.0.2</td>
<td>81µ</td>
<td>3.27 GB</td>
</tr>
<tr>
<td>log forwarder</td>
<td></td>
<td>6.7.2</td>
<td>0</td>
<td>0 bytes</td>
</tr>
</tbody>
</table>
2. If it is not present, deploy an instance from the Services section:
a. Click New Service.
b. Select Grafana from the list.
c. Accept the defaults.
d. Click Next Step.
e. Enter your user ID.
f. Click Save Service.
g. Click Apply Changes at the top.
3. To deploy the dashboard, download the file `NetopsPredictions-Dashboard.json` through the Jupyter interface.
4. Open Grafana from the Services section and import the dashboard.
5. Click Upload *.json File and select the file that you downloaded earlier (NetopsPredictions-Dashboard.json). The dashboard displays after the upload is completed.
Deploy Cleanup Function
When you generate a lot of data, it is important to keep things clean and organized. To do so, deploy the cleanup function with the `cleanup.ipynb` notebook.
Benefits
NetApp and Iguazio speed up and simplify the deployment of AI and ML applications by building in essential frameworks, such as Kubeflow, Apache Spark, and TensorFlow, along with orchestration tools like Docker and Kubernetes. By unifying the end-to-end data pipeline, NetApp and Iguazio reduce the latency and complexity inherent in many advanced computing workloads, effectively bridging the gap between development and operations. Data scientists can run queries on large datasets and securely share data and algorithmic models with authorized users during the training phase. After the containerized models are ready for production, you can easily move them from development environments to operational environments.
Conclusion
When building your own AI/ML pipelines, configuring the integration, management, security, and accessibility of the components in an architecture is a challenging task. Giving developers access and control of their environment presents another set of challenges.
The combination of NetApp and Iguazio brings these technologies together as managed services to accelerate technology adoption and improve the time to market for new AI/ML applications.
Towards a Neural Network based Reliability Prediction Model via Bugs and Changes
Camelia Șerban a and Andreea Vescan b
Department of Computer Science, Babeș-Bolyai University, M. Kogalniceanu 1, Cluj-Napoca, Romania
Keywords: Reliability, Metrics, Assessment, Prediction, Neural Network, Object-oriented Design.
Abstract: Nowadays, software systems have become larger and more complex than ever. A system failure could threaten the safety of human life. Discovering bugs as soon as possible during software development and investigating the effect of a change in the software system are two main concerns of software developers seeking to increase a system's reliability. Our approach employs a neural network to predict reliability via post-release defects and changes applied during the software development life cycle. The CK metrics are used as predictor variables, whereas the target variable is composed of both bugs and changes with different weights. This paper empirically investigates various prediction models considering different weights for the components of the target variable using five open-source projects. Two major perspectives are explored: cross-project to identify the optimum weight values for bugs and changes, and cross-project to discover the best training project for a selected weight. The results show that for both cross-project experiments, the best accuracy is obtained for the models with the highest weights for bugs (75% bugs and 25% changes) and that the best-fitted project to use for training is the PDE project.
1 INTRODUCTION
Software systems have become larger and more complex than ever. A minor change in one part of the system may cause unexpected degradation of the software system design, leading in the end to multiple bugs and defects. The impact of unreliable software includes critical damage, harm to business reputation, or even loss of human life.
Software practitioners have made significant efforts to achieve high reliability for systems during the testing process. Therefore, assessment of the software system is of utmost importance in order to keep track of the implications that may appear after a change has been applied. The main interest is to control the software quality assurance process by predicting failures and triggering a warning when the failure rate falls below an acceptable threshold.
Considering that the internal structure of the system and the changes applied successively influence software reliability to a great extent, an automatic prediction of the number of defects in a software system, using historical information from other projects, may help developers efficiently allocate limited resources. Our approach uses a neural network to predict reliability via post-release defects and changes applied during the software development life cycle. As independent variables in the prediction model we use the CK metrics, and as target variable we combine bugs (categorized by severity and priority) with changes (version, fixes, authors, codeChurn, age) using different weights.
This paper empirically investigates various prediction models using five open-source projects. The performed experiments are cross-project, and two major perspectives are explored: to identify the optimum weight values for bugs and changes, and to discover the suitable project to be used for training. The results show that for both cross-project experiments, the best model is obtained with 75% bugs and 25% changes, and that PDE is the proper project to be used for training.
The paper is organized as follows: Section 2 describes related work and discusses our approach in relation to it. Section 3 outlines our research design and the experimental setup that we employ to address the research questions: dataset, metrics, method, and analysis. Section 4 reports the results, mapping each experiment to the corresponding research question. Section 5 discusses the threats to validity that can affect the results of our study. The conclusions of our paper and further research directions are outlined in Section 6.
2 RELATED WORK
Reliability is one of the most important measurements when describing safety-critical systems. It is so important because a failure in such a system could cause loss of life. This subject has been of major interest in recent years, and several research works have studied its impact on software safety, as well as methods through which we can predict and achieve a high reliability value from the earliest development stages.
How reliability predictions can increase trust in the reliability of safety-critical systems was studied in (Schneidewind, 1997). The author determines a prediction model for different reliability measures (remaining failures, maximum failures, total test time required to attain a given fraction of remaining failures, time to next failure), concluding that they are useful for assuring that software is safe and for determining how long to test a piece of software.
Another approach (Chitra et al., 2008) defined a classifier (with 37 software metrics) and used it to classify software modules as fault-free or fault-prone. They compared their work with others and concluded that their model has the best performance. The approach in (Li et al., 2016) proposes a new tool named Automated Reliability Prediction System for predicting the reliability of safety-critical software. An experiment was conducted in which students used this tool; the result was that they made fewer mistakes in their analysis.
The work described in (Merseguer, 2003) addresses the problem of determining the error rate of the electronic parts of a track circuit system (a safety-critical system) by using Markov chains to predict the reliability of the fault-tolerant system. The paper (Lou et al., 2016) proposes an approach for predicting software reliability using relevance vector machines, kernel-based learning methods that have been adopted for regression problems.
In relation to existing approaches, ours investigates how we can use CK metrics (Chidamber and Kemerer, 1994) to predict reliability and relates to approaches (Chitra et al., 2008), (Shrikant et al., 2021), (Carleton et al., 2020), (Nayrolles and Hamou-Lhadj, 2018), with the difference that we use CK metrics instead of cyclomatic complexity, decision count, decision density, etc., and we predict a reliability value for each class in the project, instead of classifying the design classes in two categories – faulty or healthy.
Our approach investigates different weight values for post-release defects and changes using five open-source projects, exploring two perspectives: a cross-project analysis to discover the best weights for bugs and changes, and an analysis to identify the proper project to be used for training. Related questions have been explored in prior studies by Geremia and Tamburri (Geremia and Tamburri, 2018), who propose decision mechanisms to support the selection of the best defect prediction model using contextual factors of project lifetime.
3 RESEARCH DESIGN
Nowadays, when software systems are very complex and resources are limited, automatically predicting the number of failures in software modules helps developers and testers allocate resources efficiently in order to increase software system reliability.
Various aspects of the software development life cycle may cause a software failure. The current paper addresses failures identified as post-release bugs caused by defects in the source code. Machine learning is not always suitable for bug prediction due to highly unbalanced data (Mahmood et al., 2015): few design entities are found to be defective in comparison with the total number of entities in the system. To cope with this limitation, the current approach defines a metric, used as the dependent (target) variable in the prediction algorithm, that also considers, with a certain weight, the changes made to the source code. This metric is named “reliability via bugs and changes”. The reason for taking into account the changes registered during software development is that these changes also influence the system’s reliability.
Thus, in this paper we empirically investigate, using five open-source projects, the best linear combination of bugs and changes to use as the target value for a prediction model based on a Neural Network whose independent variables are the CK metrics (Chidamber and Kemerer, 1994).
Therefore, we introduce a cross-project analysis with two objectives:
- **finding the best Bugs-Changes weights:** for each considered project, various Bugs-Changes weights are explored (50B50C, 25B75C, 75B25C); the analysis compares every pair of percentage combinations.
- **finding the suitable project to be used as the training project for various Bugs-Changes percentages:** for each percentage combination, every project is used in turn for training; the analysis is conducted across all projects.
More specifically, the study aims at addressing the following research questions:
**RQ1**: Which of the Bugs-Changes percentages 50B50C, 25B75C, 75B25C is best for defect prediction?
**RQ2**: What is the proper project to be used for training a reliability prediction model?
Figure 1 presents an overview of our approach, graphically representing the training Project A and the testing Project B, the structure of the Neural Network with its input layer (CK metrics), and the prediction models with various weights for reliability via bugs and changes.
The rest of the section details the experimental design we employed to address the research questions above.
### 3.1 Dataset
The dataset used in our investigation is publicly available \(^{1}\). It includes: JDTCore \(^{2}\), an incremental Java compiler; PDE/UI \(^{3}\), an Eclipse plug-in development tool; Equinox \(^{4}\), an implementation framework for OSGi core components; Lucene \(^{5}\), a Java-based search technology; and Mylyn, a task management tool for Eclipse.
These software systems have been extensively studied in the bug-prediction research literature (D’Ambros et al., 2010). The reason for using this dataset, beyond its public availability, is that it contains both bug and change logs.
The analysis is performed at the granularity of class design entities. For each class of the last version of each system, the dataset provides the Chidamber and Kemerer metric values (Chidamber and Kemerer, 1994), the number of bugs categorized by severity and priority (trivial, major, critical, high priority), and the changes applied during the system’s development.
Table 1 lists the number of classes and the number of bugs for each project (#C = number of total classes, #CB = number of classes with bugs, #B = number of bugs, #NTB = number of non-trivial bugs, #MB = number of major bugs, #CrB = number of critical bugs, #HPB = number of high-priority bugs); #B counts bugs that were not otherwise categorized.
<table>
<thead>
<tr>
<th></th>
<th>#C</th>
<th>#CB</th>
<th>#B</th>
<th>#NTB</th>
<th>#MB</th>
<th>#CrB</th>
<th>#HPB</th>
</tr>
</thead>
<tbody>
<tr>
<td>JDT</td>
<td>997</td>
<td>206</td>
<td>374</td>
<td>17</td>
<td>35</td>
<td>10</td>
<td>3</td>
</tr>
<tr>
<td>PDE</td>
<td>1497</td>
<td>209</td>
<td>344</td>
<td>14</td>
<td>57</td>
<td>6</td>
<td>0</td>
</tr>
<tr>
<td>EQ</td>
<td>324</td>
<td>129</td>
<td>244</td>
<td>3</td>
<td>4</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>LU</td>
<td>691</td>
<td>64</td>
<td>97</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>MY</td>
<td>1862</td>
<td>245</td>
<td>340</td>
<td>187</td>
<td>18</td>
<td>3</td>
<td>36</td>
</tr>
</tbody>
</table>
The characteristics of the considered projects are: UI, Framework, Indexing and search technology, Plug-in management, and Task management. For each project we list two characteristics: JDT (UI, IndexSearch), PDE (UI, PlugIn), Equinox (UI, Framework), Lucene (UI, IndexSearch), Mylyn (UI, Task).
### 3.2 Metrics
This section details the metrics used in the proposed neural network model to predict reliability. As independent variables in the prediction model we use the CK metrics, and as target variable we combine bugs (categorized by severity and priority) with changes (revisions, fixes, authors, code churn, age) using different weights.
#### 3.2.1 CK Metrics
The metrics selected as independent variables for the proposed reliability prediction model based on bugs and changes are the CK (Chidamber and Kemerer, 1994) metrics suite: Depth of Inheritance Tree (DIT), Weighted Methods per Class (WMC), Coupling Between Objects (CBO), Response for a Class (RFC), Lack of Cohesion in Methods (LCOM), and Number of Children of a class (NOC). The definitions of these metrics are briefly presented in what follows:
- **Depth of Inheritance Tree (DIT)** is defined as the length of the longest path of inheritance from a given class to the root of the tree;
- **Weighted Methods per Class (WMC)** is defined as the sum of the complexities of all methods of a given class, where the complexity of a method is its cyclomatic complexity;
- **Coupling Between Objects (CBO)** for a class c is the number of other classes coupled to c; two classes are coupled when
---
\(^{1}\)http://bug.inf.usi.ch/index.php
\(^{2}\)https://www.eclipse.org/jdt/core/index.php
\(^{3}\)https://www.eclipse.org/pde/pde-ui/
\(^{4}\)https://projects.eclipse.org/projects/eclipse.equinox
\(^{5}\)http://lucene.apache.org/
methods declared in one class use methods or instance variables defined by the other class;
- **Response for a Class (RFC)** metric is defined as the total number of methods that can be invoked from that class;
- **Lack of Cohesion in Methods (LCOM)** is defined as the number of method pairs that do not share any instance variables minus the number of method pairs that do share instance variables;
- **Number of Children (NOC)** of a class is the number of immediate subclasses subordinated to the class in the class hierarchy. The theoretical basis of the NOC metric relates to the notion of scope of properties: it measures how many subclasses will inherit the methods of the parent class.
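To illustrate the two inheritance-based metrics, DIT and NOC can be computed directly from a parent map. The class names below are hypothetical and the sketch is purely illustrative, not the measurement tool used in the study:

```python
# parents maps each class to its parent (None for the root of the hierarchy)
parents = {"Object": None, "A": "Object", "B": "A", "C": "A", "D": "B"}

def dit(cls):
    """Depth of Inheritance Tree: length of the path from cls to the root."""
    depth = 0
    while parents[cls] is not None:
        cls = parents[cls]
        depth += 1
    return depth

def noc(cls):
    """Number of Children: count of immediate subclasses of cls."""
    return sum(1 for p in parents.values() if p == cls)

print(dit("D"), noc("A"))  # 3 2
```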
One of the reasons for selecting these metrics is that they are linked to four internal characteristics essential to object-orientation: coupling, inheritance, cohesion, and structural complexity (Marinescu, 2002). They have also been validated as good predictors of software quality. Tang et al. (Tang et al., 1999) validated the CK metrics on real-time systems, and the results suggested that WMC can be a good indicator of faulty classes. Li (Li, 1998) theoretically validated the CK metrics using a metric-evaluation framework proposed by Kitchenham et al. (Kitchenham et al., 1995). The metrics for our study have been selected based on these arguments.
The goal of this study is thus to explore the relationship between object-oriented metrics and reliability at the class level. To attain this, a target metric for reliability is needed. In the following sections we define this metric as an aggregated measure of two components: the first takes bugs into account, and the second considers changes made during the software development life cycle.
#### 3.2.2 Bugs Metrics
The bugs described by our dataset are grouped into categories according to their severity and priority. The following types of bugs were reported by a bug tracking system:
- **#HighPriorityBugs (#HPB)** - number of bugs considered to be a priority;
- **#NonTrivialBugs (#NTB)** - number of bugs being non-trivial;
- **#MajorBugs (#MB)** - number of bugs having a major importance;
- **#CriticalBugs (#CB)** - number of bugs considered to be critical;
- **#Bugs** - number of bugs that were not categorized.
Our goal is to use these categories of bugs to define an aggregate metric that serves as the target value for reliability prediction. We assign a weight to each category according to an order relation that establishes a priority in solving these faults/bugs. Therefore, for bugs having “high priority” and for those being “major” and “critical” we assigned a weight of 0.25, whereas “non-trivial” bugs and uncategorized bugs were assigned weights of 0.15 and 0.10, respectively.
Equation 1 defines the bug component of the target value for reliability prediction, at the granularity of the class entity of the object-oriented design model. The values of these metrics are collected during testing, operation, and maintenance in order to draw conclusions about the reliability of the system. The weight values assigned to the metrics come from empirical observations.
\[
\text{BugsTarget} = (0.25 \cdot \#\text{HPB} + 0.15 \cdot \#\text{NTB} + 0.25 \cdot \#\text{MB} + 0.25 \cdot \#\text{CB} + 0.10 \cdot \#\text{Bugs}). \tag{1}
\]
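Equation 1 can be computed directly; the sketch below is ours (the function name is not from the paper), with the weights taken verbatim from the equation:

```python
def bugs_target(hpb, ntb, mb, cb, bugs):
    """Weighted bug score for one class, per Equation 1."""
    return 0.25 * hpb + 0.15 * ntb + 0.25 * mb + 0.25 * cb + 0.10 * bugs

# e.g. a class with 1 high-priority, 2 non-trivial, 0 major,
# 1 critical and 3 uncategorized bugs:
print(bugs_target(1, 2, 0, 1, 3))  # ~1.10
```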
#### 3.2.3 Change Metrics
To quantify the changes we use the catalog of process metrics introduced by Moser et al. (Moser et al., 2008):
- \#\text{fixes (NFIX)}
- \#\text{authors (NAUTH)}
- \#\text{revisions (NR)}
- \text{Code Churn (CHURN)}
- \text{Age (in number of weeks)}
- \text{Weighted Age (AGE) of a class}
The granularity level is the “class” design entity of the object-oriented design model.
All the above metrics are combined into a measure of reliability via changes as a linear combination with all weights equal to 0.2; see Equation 2.
\[
\text{ChangesTarget} = (0.20 \cdot \text{NR} + 0.20 \cdot \text{NFIX} + 0.20 \cdot \text{NAUTH} + 0.20 \cdot \text{CHURN} + 0.20 \cdot \text{AGE}). \tag{2}
\]
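Equation 2 is a plain equal-weight combination; a minimal sketch (function name ours):

```python
def changes_target(nr, nfix, nauth, churn, age):
    """Equal-weight change score for one class, per Equation 2."""
    return 0.20 * (nr + nfix + nauth + churn + age)

# e.g. 5 revisions, 2 fixes, 3 authors, code churn 10, age 20 weeks:
print(changes_target(5, 2, 3, 10, 20))  # 8.0
```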
**Remark:** The current investigation assigns equal importance to each element; future investigation will vary the importance of the elements.
#### 3.2.4 Target Reliability Prediction Metric
As mentioned earlier, to cope with unbalanced data, our defect-based model also employs the changes performed on the source code. The dataset (D’Ambros et al., 2010) also contains historical data, such as versions, fixes, authors, and refactorings, which could be used further in the reliability estimation model. Our experiments therefore investigate an aggregated metric, used as the dependent (target) variable for predicting reliability, based on two components, each taking several aspects into account as discussed in the previous sections. These components are bugs and changes, with different assigned weights. Equation 3 describes the target value for reliability prediction via bugs and changes. The proposed validation model explores several values for the weight \(\alpha\) in this equation.
\[
\text{Reliability} = \alpha \cdot \text{BugsTarget} + (1 - \alpha) \cdot \text{ChangesTarget} \tag{3}
\]
### 3.3 Applied Method
To predict reliability, a feed-forward neural network (Russel and Norvig, 1995) with back-propagation learning is used, with the following structure: six nodes in the input layer (one for each considered metric), one node in the output layer, and two hidden layers of five nodes each. Each node uses the bipolar sigmoid activation function.
**Remark:** The current investigation considered only this type of neural network, the focus being on studying various percentages for the bug and change metrics. Future investigation will vary the architecture of the neural network.
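The 6-5-5-1 feed-forward structure with bipolar sigmoid activations can be sketched as a plain-Python forward pass. The weights below are random placeholders (the paper's trained weights are not available), so this only illustrates the architecture, not the trained model:

```python
import math
import random

def bipolar_sigmoid(x):
    # 2/(1+e^-x) - 1, which maps any input into (-1, 1)
    return 2.0 / (1.0 + math.exp(-x)) - 1.0

def forward(x, layers):
    """One forward pass; layers is a list of (weights, biases) pairs,
    where weights[i][j] connects input j to node i of the layer."""
    for W, b in layers:
        x = [bipolar_sigmoid(sum(w * xi for w, xi in zip(row, x)) + bi)
             for row, bi in zip(W, b)]
    return x

random.seed(0)
def rand_layer(n_in, n_out):
    return ([[random.uniform(-0.5, 0.5) for _ in range(n_in)]
             for _ in range(n_out)],
            [0.0] * n_out)

# 6 inputs (CK metrics) -> 5 -> 5 -> 1 output (predicted reliability)
net = [rand_layer(6, 5), rand_layer(5, 5), rand_layer(5, 1)]
y = forward([0.1, 0.2, 0.3, 0.4, 0.5, 0.6], net)
print(len(y))  # 1
```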
The CK metrics mentioned above form the input vector of the neural network, and the reliability metric computed from bugs and changes is the expected output. We applied Min-Max normalization to all considered metrics (WMC, RFC, NOC, LCOM, DIT, CBO) and also to the bugs-and-changes-based reliability metric.
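Min-Max normalization rescales each metric into [0, 1] before training; a minimal sketch:

```python
def min_max(values):
    """Rescale a list of metric values into [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        # all values identical: no spread to normalize
        return [0.0] * len(values)
    return [(v - lo) / (hi - lo) for v in values]

print(min_max([3, 10, 17]))  # [0.0, 0.5, 1.0]
```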
Training terminates either when the error is at most 0.001 or after at most 10000 epochs. After training this neural network, we obtain a neural network model for reliability prediction.
We performed cross-project validation from two viewpoints: discovering the best percentages for Bugs and Changes, and discovering, for a given project, the proper project to use for training.
We thus have 15 prediction models (5 projects, each with 3 different percentage combinations): JDT-training, PDE-training, EQ-training, LU-training, and MY-training, each with the 50B50C, 25B75C, and 75B25C weightings. In each experiment the other four projects were used for the testing phase of that specific prediction model; for example, the PDE, EQ, LU, and MY projects were used to test the JDT-trained prediction model. In Figure 1, one prediction model takes JDT as “Project A”, while “Project B” is in turn PDE, then EQ, then LU, and finally MY.
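The 15 train/test configurations can be enumerated mechanically; a sketch using the project abbreviations from Table 1:

```python
from itertools import product

projects = ["JDT", "PDE", "EQ", "LU", "MY"]
weightings = ["50B50C", "25B75C", "75B25C"]

# One model per (training project, weighting); the other four
# projects serve as the test set for that model.
models = []
for train, w in product(projects, weightings):
    test_projects = [p for p in projects if p != train]
    models.append((train, w, test_projects))

print(len(models))  # 15
```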
### 3.4 Analysis and Metrics Used to Compare the Models
#### 3.4.1 Wilcoxon Signed-Rank Test
The Wilcoxon signed-ranks test (Derrac et al., 2011) is used to answer the following question: do two samples represent two different populations? It is a non-parametric procedure used in hypothesis-testing situations involving a design with two samples. It is a pairwise test that aims to detect significant differences between two sample means, that is, between the behavior of two algorithms.
We applied the Wilcoxon signed-ranks test for each training project, comparing the differences between the target and predicted values for any two percentage configurations of Bugs and Changes.
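A stdlib-only sketch of the Wilcoxon signed-rank test using the large-sample normal approximation (no zero or tie variance correction, so this is only an approximation of the procedure cited from Derrac et al., not the exact implementation used in the study):

```python
import math

def wilcoxon_signed_rank(a, b):
    """Return (W+, two-sided p) via the normal approximation."""
    diffs = [x - y for x, y in zip(a, b) if x != y]  # discard zero differences
    n = len(diffs)
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:  # assign average ranks to ties in |diff|
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    mu = n * (n + 1) / 4.0
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    z = (w_plus - mu) / sigma
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return w_plus, p

base = list(range(1, 21))
_, p_shifted = wilcoxon_signed_rank(base, [x + 1 for x in base])   # systematic shift
_, p_balanced = wilcoxon_signed_rank(
    base, [x + (1 if i % 2 else -1) for i, x in enumerate(base)])  # no net shift
print(p_shifted, p_balanced)
```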
#### 3.4.2 Root Mean Squared Error Metric
The Root Mean Squared Error (RMSE) metric is used to validate our model. RMSE is a quadratic scoring rule that measures the average magnitude of the error: it is the square root of the average of the squared differences between predictions and actual observations. Equivalently, RMSE is the standard deviation of the residuals (prediction errors). Residuals measure how far data points lie from the regression line; RMSE measures how spread out these residuals are, that is, how concentrated the data is around the line of best fit.
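RMSE as described can be sketched as:

```python
import math

def rmse(actual, predicted):
    """Root mean squared error between target and predicted values."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0
```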
4 RESULTS
In what follows we outline the results obtained for the two viewpoints: discovering the best percentages between Bugs and Changes, and discovering the proper project to use for training.
### 4.1 Results for Finding the Best Bugs-Changes Weights
The results for discovering the best Bugs-Changes percentages (50B50C, 25B75C, 75B25C) are provided in Table 2, which lists the RMSE values for the set of 15 models (5 projects as training, each with 3 different Bugs-Changes percentages). Inspecting the table, we notice that for each project used as training, validation on all the other projects obtained better results with the 75B25C weights, except for EQ training with MY validation (50B50C). We colored the background in gray for the best results in Table 2.
Thus, we can better predict reliability from the CK metrics using the weights 75% Bugs and 25% Changes.
Table 2: Best percentages using all projects as basic training - RMSE values.
<table>
<thead>
<tr>
<th></th>
<th>50B50C</th>
<th>25B75C</th>
<th>75B25C</th>
</tr>
</thead>
<tbody>
<tr><td colspan="4"><strong>JDT-training (RMSE values using JDT)</strong></td></tr>
<tr><td>JDT</td><td>0.1867131065</td><td>0.19179037</td><td>0.128105326</td></tr>
<tr><td>PDE</td><td>0.167290821</td><td>0.240823187</td><td>0.111907291</td></tr>
<tr><td>EQ</td><td>0.197666134</td><td>0.22655712</td><td>0.132700994</td></tr>
<tr><td>LU</td><td>0.140745709</td><td>0.157392999</td><td>0.129820081</td></tr>
<tr><td>MY</td><td>0.1738218694</td><td>0.225929332</td><td>0.113529354</td></tr>
<tr><td colspan="4"><strong>PDE-training (RMSE values using PDE)</strong></td></tr>
<tr><td>JDT</td><td>0.191519539</td><td>0.22859332</td><td>0.113529354</td></tr>
<tr><td>PDE</td><td>0.098123535</td><td>0.212072039</td><td>0.071749605</td></tr>
<tr><td>EQ</td><td>0.090000284</td><td>0.1290959219</td><td>0.071018641</td></tr>
<tr><td>LU</td><td>0.167899293</td><td>0.152016918</td><td>0.065734533</td></tr>
<tr><td>MY</td><td>0.168926548</td><td>0.169920985</td><td>0.17977177</td></tr>
<tr><td colspan="4"><strong>LU-training (RMSE values using LU)</strong></td></tr>
<tr><td>JDT</td><td>0.201938201</td><td>0.221695527</td><td>0.144877536</td></tr>
<tr><td>PDE</td><td>0.12452023</td><td>0.155362424</td><td>0.108192312</td></tr>
<tr><td>EQ</td><td>0.12821925</td><td>0.14952299</td><td>0.120341372</td></tr>
<tr><td>MY</td><td>0.16638368</td><td>0.169920985</td><td>0.17977177</td></tr>
<tr><td colspan="4"><strong>MY-training (RMSE values using MY)</strong></td></tr>
<tr><td>JDT</td><td>0.203440039</td><td>0.248372703</td><td>0.161926514</td></tr>
<tr><td>PDE</td><td>0.075215563</td><td>0.106751237</td><td>0.058844177</td></tr>
<tr><td>EQ</td><td>0.116125632</td><td>0.194066449</td><td>0.087998055</td></tr>
<tr><td>MY</td><td>0.178250872</td><td>0.1855377</td><td>0.080808523</td></tr>
</tbody>
</table>
Another analysis investigated whether there is a difference among the various considered percentages (50B50C, 25B75C, 75B25C) using the Wilcoxon signed-ranks test. The resulting p-values are provided in Table 3.
We notice that for most projects and comparisons there is a significant difference between 50B50C-25B75C, 50B50C-75B25C, and 25B75C-75B25C. We colored in gray the cells
Table 3: Best percentages using all projects as basic training - p-values for the Wilcoxon test.
<table>
<thead>
<tr>
<th></th>
<th>50B50C-25B75C</th>
<th>50B50C-75B25C</th>
<th>25B75C-75B25C</th>
</tr>
</thead>
<tbody>
<tr><td colspan="4"><strong>JDT-training (p-values using JDT)</strong></td></tr>
<tr><td>PDE</td><td>0.6737173672</td><td>5.124755E-76</td><td>6.223E-127</td></tr>
<tr><td>EQ</td><td>2.84350E-07</td><td>5.2123E-64</td><td>8.903E-110</td></tr>
<tr><td>LU</td><td>6.17709E-80</td><td>6.740E-125</td><td>4.9031E-131</td></tr>
<tr><td>MY</td><td>1.32409E-56</td><td>4.9155E-52</td><td>2.52249E-39</td></tr>
<tr><td colspan="4"><strong>PDE-training (p-values using PDE)</strong></td></tr>
<tr><td>EQ</td><td>0.310320414</td><td>0.04673915</td><td>4.0491E-16</td></tr>
<tr><td>LU</td><td>1.2529E-148</td><td>6.8599E-140</td><td>8.7227E-67</td></tr>
<tr><td>MY</td><td>4.0385E-07</td><td>2.901E-104</td><td>1.432E-180</td></tr>
<tr><td colspan="4"><strong>EQ-training (p-values using EQ)</strong></td></tr>
<tr><td>JDT</td><td>3.3873E-07</td><td>3.9405E-30</td><td>7.104E-42</td></tr>
<tr><td>PDE</td><td>2.3138E-12</td><td>2.8319E-05</td><td>0.846571</td></tr>
<tr><td>LU</td><td>0.326247909</td><td>0.240985176</td><td>0.110562683</td></tr>
<tr><td>MY</td><td>0.820209572</td><td>2.3179E-14</td><td>1.85178E-27</td></tr>
<tr><td colspan="4"><strong>LU-training (p-values using LU)</strong></td></tr>
<tr><td>JDT</td><td>9.5954E-64</td><td>2.3745E-92</td><td>1.0711E-96</td></tr>
<tr><td>PDE</td><td>7.2181E-94</td><td>4.6747E-74</td><td>1.6209E-23</td></tr>
<tr><td>EQ</td><td>1.0368E-12</td><td>3.1702E-21</td><td>3.84541E-27</td></tr>
<tr><td>MY</td><td>2.6577E-39</td><td>5.515E-103</td><td>1.5353E-110</td></tr>
<tr><td colspan="4"><strong>MY-training (p-values using MY)</strong></td></tr>
<tr><td>JDT</td><td>3.8470E-66</td><td>4.8306E-66</td><td>4.8006E-14</td></tr>
<tr><td>LU</td><td>2.0122E-124</td><td>2.813E-211</td><td>1.5807E-219</td></tr>
<tr><td>MY</td><td>2.2716E-201</td><td>1.635E-274</td><td>1.1776E-291</td></tr>
</tbody>
</table>
where the p-value is ≤ 0.05. There are cases in which the value of p is > 0.05: JDT-training with PDE validation and PDE-training with EQ validation for 50B50C-25B75C; EQ-training with PDE validation for 25B75C-75B25C, with LU validation for all configurations, and with MY validation for the 50B50C-25B75C configuration.
In summary, with respect to our RQ1, namely Which of the Bugs-Changes percentages 50B50C, 25B75C, 75B25C is best for defect prediction?, we elaborate the following response:
**The experiments using cross-project prediction models identified 75B25C (75% Bugs, 25% Changes) as the best Bugs-Changes weighting.**
**5 THREATS TO VALIDITY**
The reliability prediction approach, like every experimental analysis, may suffer from threats to validity that can affect the results of our study.
**Threats to internal validity** refer to the subjectivity introduced in setting the weights of the reliability estimation equation. To minimize these threats, we considered various weights for bugs and changes: 50B50C, 25B75C, and 75B25C. Also, the use of bugs to predict reliability could be considered too simplistic, since reliable software systems are not achieved simply by removing bugs. However, our approach does not consider all aspects of reliability but only those related to bugs and those that influence the increase in the number of bugs, i.e., the changes applied during the system’s development.
**Threats to external validity** are related to the generalization of the obtained results. Only five open-source projects were considered for evaluation, all written in the same programming language (Java) and each in a single version. However, we performed cross-project validation: each prediction model was validated on the four other projects.
6 CONCLUSION AND FUTURE WORK
This paper investigates the reliability of a system via bugs and changes. Our approach exploits a neural network model to predict reliability considering two relevant aspects: post-release defects and changes applied during the software development life cycle. The CK metrics are used as independent variables in the prediction model.
Five open-source projects are used in the experiments, and two major perspectives are explored, both via cross-project experiments: identifying the optimum weight values for bugs and changes, and discovering the proper project to use for training.
The results show that for both cross-project experiments, the best accuracy is obtained by the models with the highest weight for bugs, i.e., 75B25C, and that the appropriate project to use for training is the PDE project.
As future work, we aim to extend the proposed reliability prediction model and to better demonstrate its applicability through more case studies. At the same time, further investigation on how to empirically determine the metric weights will be considered.
REFERENCES
On Runtime Enforcement via Suppressions
Luca Aceto
Gran Sasso Science Institute, L’Aquila, Italy; and
Reykjavik University, Reykjavik, Iceland
lucac05@gsisi.it
Ian Cassar
Reykjavik University, Reykjavik, Iceland; and
University of Malta, Msida, Malta
ianc@ru.is
Adrian Francalanza
University of Malta, Msida, Malta
adrian.francalanza@um.edu.mt
Anna Ingólfsdóttir
Reykjavik University, Reykjavik, Iceland
annai@ru.is
Abstract
Runtime enforcement is a dynamic analysis technique that uses monitors to enforce the behaviour specified by some correctness property on an executing system. The enforceability of a logic captures the extent to which the properties expressible via the logic can be enforced at runtime. We study the enforceability of Hennessy-Milner Logic with Recursion (µHML) with respect to suppression enforcement. We develop an operational framework for enforcement which we then use to formalise when a monitor enforces a µHML property. We also show that the safety syntactic fragment of the logic, sHML, is enforceable by providing an automated synthesis function that generates correct suppression monitors from sHML formulas.
2012 ACM Subject Classification Theory of computation → Logic and verification, Software and its engineering → Software verification, Software and its engineering → Dynamic analysis
Keywords and phrases Enforceability, Suppression Enforcement, Monitor Synthesis, Logic
Digital Object Identifier 10.4230/LIPIcs.CONCUR.2018.34
Acknowledgements The research work disclosed in this publication is partially supported by the projects “Developing Theoretical Foundations for Runtime Enforcement” (184776-051) and “TheoFoMon: Theoretical Foundations for Monitorability” (163406-051) of the Icelandic Research Fund, and by the Endeavour Scholarship Scheme (Malta), part-financed by the European Social Fund (ESF) – Operational Programme II – Cohesion Policy 2014-2020.
1 Introduction
Runtime monitoring [22, 24] is a dynamic analysis technique that is becoming increasingly popular in the turbid world of software development. It uses code units called monitors to aggregate system information, compare system execution against correctness specifications, or steer the execution of the observed system. The technique has been used effectively to offload certain verification tasks to a post-deployment phase, thus complementing other
(static) analysis techniques in multi-pronged verification strategies – see e.g., [6, 12, 27, 18, 28]. Runtime enforcement (RE) [33, 34, 21] is a specialized monitoring technique, used to ensure that the behaviour of a system-under-scrutiny (SuS) is always in agreement with some correctness specification. It employs a specific kind of monitor (referred to as a transducer [9, 42, 4] or an edit-automaton [33, 34]) to anticipate incorrect behaviour and counter it. Such a monitor thus acts as a proxy between the SuS and the surrounding environment interacting with it, encapsulating the system to form a composite (monitored) system: at runtime, the monitor transforms any incorrect executions exhibited by the SuS into correct ones by either suppressing, inserting or replacing events on behalf of the system.
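As a loose, informal illustration (not the paper's formal transducer model), a suppression monitor can be viewed as a proxy that forwards permitted events to the environment and silently drops the ones that would violate the property. The event names and property below are hypothetical:

```python
def suppression_monitor(forbidden):
    """A toy suppression transducer: forwards events not in `forbidden`
    and suppresses the rest, so the environment never observes them."""
    def transform(trace):
        visible = []
        for event in trace:
            if event in forbidden:
                continue  # suppressed on behalf of the system
            visible.append(event)
        return visible
    return transform

# Enforce a (hypothetical) "never write" safety property on a trace:
monitor = suppression_monitor({"write!"})
print(monitor(["open!", "read!", "write!", "read!"]))  # ['open!', 'read!', 'read!']
```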
We extend a recent line of research [25, 24, 2, 1] and study RE approaches that adopt a separation of concerns between the correctness specification, describing what properties the SuS should satisfy, and the monitor, describing how to enforce these properties on the SuS. Our work considers system properties expressed in terms of the process logic $\mu$HML [30, 32], and explores what properties can be operationally enforced by monitors that can suppress system behaviour. A central element for the realisation of such an approach is the synthesis function: it automates the translation from declarative $\mu$HML specifications to algorithmic descriptions formulated as executable monitors. Since analysis tools ought to form part of the trusted computing base, enforcement monitoring should be, in and of itself, correct. However, it is unclear what is to be expected of the synthesised monitor to adequately enforce a $\mu$HML formula. Nor is it clear for which types of specifications this approach should be expected to work effectively: it has been well established that a number of properties are not monitorable [15, 40, 16, 25, 2], and it is therefore reasonable to expect similar limits in the case of enforceability [19]. We therefore study the relationship between $\mu$HML specifications and suppression monitors for enforcement, which allows us to address the above-mentioned concerns and make the following contributions:
**Modelling:** We develop a general framework for enforcement instrumentation that is parametrisable by any system behaviour that is expressed via labelled transitions, and can express suppression, insertion and replacement enforcement, Figure 2.
**Correctness:** We give formal definitions for asserting when a monitor correctly enforces a formula defined over labelled transition systems, Definitions 3 and 8. These definitions are parametrisable with respect to an instrumentation relation, an instance of which is our enforcement framework of Figure 2.
**Expressiveness:** We provide enforceability results, Theorems 14 and 18 (but also Proposition 24), by identifying a subset of $\mu$HML formulas that can be (correctly) enforced by suppression monitors.
As a by-product of this study, we also develop a formally-proven correct synthesis function, Definition 12, that then can be used for tool construction, along the lines of [8, 7].
The setup selected for our study serves a number of purposes. First, the chosen logic, $\mu$HML, is a branching-time logic that allows us to investigate enforceability for properties describing computation graphs. Second, the use of a highly expressive logic allows us to achieve a good degree of generality for our results; by working in relation to logics like $\mu$HML (a reformulation of the $\mu$-calculus), our work also applies to other widely used logics (such as LTL and CTL [17]) that are embedded within it. Third, since the logic is verification-technique agnostic, it fits better with the realities of present-day software verification, where a variety of techniques (e.g., model-checking and testing) straddling both pre- and post-deployment phases are used. In such cases, knowing which properties can be verified statically and which ones can be monitored for and enforced at runtime is crucial for devising effective multi-pronged verification strategies. Equipped
with such knowledge, one could also employ standard techniques [36, 5, 31] to decompose a non-enforceable property into a collection of smaller properties, a subset of which can then be enforced at runtime.
Structure of the paper. Section 2 revisits labelled transition systems and our touchstone logic, μHML. The operational model for enforcement monitors and instrumentation is given in Section 3. In Section 4 we formalise the interdependent notions of correct enforcement and enforceability. These act as a foundation for the development of a synthesis function in Section 5 that produces correct-by-construction monitors. In Section 6 we consider alternative definitions of enforceability for logics with a specific additional interpretation, and show that our proposed synthesis function is still correct with respect to the new definition. Section 7 concludes and discusses related work.
2 Preliminaries
The Model. We assume systems described as labelled transition systems (LTSs), triples $(\text{Sys}, \text{Act} \cup \{\tau\}, \rightarrow)$ consisting of a set of system states, $s, r, q \in \text{Sys}$, a set of observable actions, $\alpha, \beta \in \text{Act}$, and a distinguished silent action $\tau \notin \text{Act}$ (where $\mu \in \text{Act} \cup \{\tau\}$), and a transition relation, $\rightarrow \subseteq (\text{Sys} \times (\text{Act} \cup \{\tau\}) \times \text{Sys})$. We write $s \xrightarrow{\mu} r$ in lieu of $(s, \mu, r) \in \rightarrow$, and use $s \overset{\alpha}{\Rightarrow} s'$ to denote weak transitions representing $s (\xrightarrow{\tau})^* \cdot \xrightarrow{\alpha} \cdot (\xrightarrow{\tau})^* s'$. We refer to $s'$ as a $\mu$-derivative of $s$. Traces, $t, u \in \text{Act}^*$, range over (finite) sequences of observable actions, and we write $s \overset{t}{\Rightarrow} r$ to denote a sequence of weak transitions $s \overset{\alpha_1}{\Rightarrow} \cdots \overset{\alpha_n}{\Rightarrow} r$ where $t = \alpha_1 \ldots \alpha_n$. We also assume the classic notion of strong bisimilarity [39, 43] for our model, $s \sim r$, using it as our touchstone system equivalence. The syntax of the regular fragment of CCS [39] is occasionally used to concisely describe LTSs in our examples.
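The weak-transition notation above can be made concrete with a small executable sketch. This is our own illustration, not part of the paper: the `LTS` class, the state names, and the string encoding of actions are all assumptions made for the example.

```python
# A minimal sketch of a finite LTS with silent actions, and the weak
# transition s =α=> s', i.e. (−τ→)* · −α→ · (−τ→)*.
TAU = "τ"

class LTS:
    def __init__(self, transitions):
        # transitions: set of (state, action, state) triples
        self.trans = set(transitions)

    def step(self, s, mu):
        """Strong transitions: all µ-derivatives of s."""
        return {r for (q, a, r) in self.trans if q == s and a == mu}

    def tau_closure(self, states):
        """All states reachable via (−τ→)*."""
        seen, frontier = set(states), set(states)
        while frontier:
            frontier = {r for s in frontier for r in self.step(s, TAU)} - seen
            seen |= frontier
        return seen

    def weak_step(self, s, alpha):
        """Weak transitions s =α=> s'."""
        mids = {r for q in self.tau_closure({s}) for r in self.step(q, alpha)}
        return self.tau_closure(mids)

    def weak_trace(self, s, trace):
        """States reachable from s along a trace t = α1…αn of weak steps."""
        states = {s}
        for alpha in trace:
            states = {r for q in states for r in self.weak_step(q, alpha)}
        return states

# A tiny system: s0 −τ→ s1 −a→ s2 −b→ s3
sys_lts = LTS({("s0", TAU, "s1"), ("s1", "a", "s2"), ("s2", "b", "s3")})
```

Here `sys_lts.weak_step("s0", "a")` yields `{"s2"}`: the leading τ-step is absorbed by the weak transition, exactly as in the definition above.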
The Logic. We consider a slightly generalised version of $\mu$HML [32, 3] that uses symbolic actions of the form $[p, c]$. Patterns, $p$, abstract over actions using data variables $d, e, f \in \text{Var}$; in a pattern, they may either occur free, $d$, or as binders, $(d)$, where a closed pattern is one without any free variables. We assume a (partial) matching function for closed patterns, $\text{match}(p, \alpha)$, that returns a substitution $\sigma$ (when successful) mapping variables in $p$ to the corresponding values in $\alpha$, i.e., if we instantiate every bound variable $(d)$ in $p$ with the value $\sigma(d)$, we obtain the matched action $\alpha$.
The filtering condition, $c$, contains variables found in $p$ and evaluates wrt. the substitutions returned by successful matches. Put differently, a closed symbolic action $[p, c]$ is one where $p$ is closed and $fv(c) \subseteq bv(p)$; it denotes the set of actions $[[p, c]] = \{ \alpha \mid \exists \sigma . \text{match}(p, \alpha) = \sigma \textrm{ and } c\sigma \downarrow \text{true} \}$ and allows more adequate reasoning about LTSs with infinite actions (e.g., actions carrying data from infinite domains).
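To illustrate, here is a minimal sketch of the matching function and the denotation $[[p, c]]$. The encoding of actions as (port, polarity, payload) triples, the binder syntax `"(d)"`, and the function names are our own assumptions for the example, not the paper's.

```python
# Matching a closed symbolic action [p, c] against concrete actions.
def match(pattern, action):
    """Return a substitution σ (dict) on success, None on failure."""
    if len(pattern) != len(action):
        return None
    sigma = {}
    for p_comp, a_comp in zip(pattern, action):
        if isinstance(p_comp, str) and p_comp.startswith("(") and p_comp.endswith(")"):
            sigma[p_comp[1:-1]] = a_comp        # binder (d): capture the value
        elif p_comp != a_comp:
            return None                          # literal component must agree
    return sigma

def denotes(pattern, cond, actions):
    """[[p, c]]: actions matching p whose substitution satisfies c."""
    result = set()
    for alpha in actions:
        sigma = match(pattern, alpha)
        if sigma is not None and cond(sigma):
            result.add(alpha)
    return result

acts = {("i", "?", "req"), ("j", "?", "req"), ("i", "!", "ans")}
# The symbolic action [(d)?req, d ≠ j] used in the examples below:
sel = denotes(("(d)", "?", "req"), lambda s: s["d"] != "j", acts)
```

The condition filters out the match on port `j`, so `sel` contains only the request on port `i`, mirroring how $[[p, c]]$ restricts the pattern's matches.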
The logic syntax is given in Figure 1 and assumes a countable set of logical variables $X, Y \in \text{LVar}$. Apart from standard logical constructs such as conjunctions and disjunctions ($\bigwedge_{i \in I} \varphi_i$ describes a compound conjunction, $\varphi_1 \land \ldots \land \varphi_n$, where $I = \{1, \ldots, n\}$ is a finite set of indices, and similarly for disjunctions), and the characteristic greatest and least fixpoints ($\max X.\varphi$ and $\min X.\varphi$ bind free occurrences of $X$ in $\varphi$), the logic uses necessity and possibility modal operators with symbolic actions, $[p, c]\varphi$ and $\langle p, c \rangle\varphi$, where $bv(p)$ bind free data variables in $c$ and $\varphi$. Formulas in $\mu$HML are interpreted over the system powerset domain, $S \in P(\text{Sys})$. The semantic definition of Figure 1, $[[\varphi, \rho]]$, is given for both open and closed formulas. It employs a valuation from logical variables to sets of states, $\rho : \text{LVar} \rightarrow P(\text{Sys})$, which permits an inductive definition on the structure of the formulas; $\rho' = \rho[X \mapsto S]$ denotes a valuation where $\rho'(X) = S$ and $\rho'(Y) = \rho(Y)$ for all $Y \neq X$. The only non-standard cases are those for the modal formulas, due to the use of symbolic actions. Note that we recover the standard logic for symbolic actions $[p, c]$ whose pattern $p$ does not contain variables ($p = \alpha$ for some $\alpha$) and whose condition holds trivially ($c = \text{true}$); in such cases we write $[\alpha]\varphi$ and $\langle \alpha \rangle\varphi$ for short. We generally assume closed formulas, i.e., without free logical and data variables, and write $[\varphi]$ in lieu of $[[\varphi, \rho]]$ since the interpretation of a closed $\varphi$ is independent of $\rho$. A system $s$ satisfies formula $\varphi$ whenever $s \in [\varphi]$, whereas a formula $\varphi$ is satisfiable, $\varphi \in \text{Sat}$, whenever there exists a system $r$ such that $r \in [\varphi]$.
**Example 1.** Consider two systems (a good system, $s_g$, and a bad one, $s_b$) implementing a server that interacts on port $i$, repeatedly accepting requests that are answered by outputting on the same port, and terminating the service once a close request is accepted (on the same port). Whereas $s_g$ outputs an answer $(i!ans)$ for every request $(i?req)$, $s_b$ occasionally refuses to answer a given request (see the underlined branch). Both systems terminate with $i?cls$.
$$s_g = \text{rec } x.(i?\text{req}.i!\text{ans}.x + i?\text{cls}.\text{nil}) \qquad s_b = \text{rec } x.(i?\text{req}.i!\text{ans}.x + \underline{i?\text{req}.x} + i?\text{cls}.\text{nil})$$
We can specify that two consecutive requests on port $i$ indicate invalid behaviour via the $\mu$HML formula $\varphi_0 = \max X.[i?\text{req}]([i!\text{ans}]X \land [i?\text{req}]\text{ff})$. It defines an invariant property ($\max X.(\cdots)$) requiring that whenever a system interacting on $i$ inputs a request, it cannot input a subsequent request, i.e., $[i?\text{req}](\cdots \land [i?\text{req}]\text{ff})$, unless it outputs an answer beforehand, in which case the formula recurses, i.e., $[i!\text{ans}]X$. Using symbolic actions, we can generalise $\varphi_0$ by requiring the property to hold for any interaction happening on any port number except $j$.
$$\varphi_1 = \max X.[(d)?\text{req}, d \neq j]\big([d!\text{ans}, \text{true}]X \land [d?\text{req}, \text{true}]\text{ff}\big)$$
In $\varphi_1$, $(d)?\text{req}$ binds the free occurrences of $d$ found in $d \neq j$ and in $[d!\text{ans}, \text{true}]X \land [d?\text{req}, \text{true}]\text{ff}$. Using Figure 1, one can check that $s_g \in [\varphi_1]$, whereas $s_b \not\in [\varphi_1]$ since $s_b \xrightarrow{i?\text{req}} \cdot \xrightarrow{i?\text{req}} \cdots$
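As a concrete illustration of the behaviour $\varphi_0$ rules out, the following sketch checks a single execution trace for a request that is followed by another request before any answer. This is a trace-level approximation of our own, not the branching-time semantics of the formula; the action strings are assumptions made for the example.

```python
# Witness check for the violations φ0 forbids on a single run: an i?req
# followed by another i?req before any i!ans is produced.
def violates_phi0(trace):
    awaiting_answer = False          # True right after an unanswered i?req
    for act in trace:
        if act == "i?req":
            if awaiting_answer:
                return True          # two consecutive requests: violation
            awaiting_answer = True
        elif act == "i!ans":
            awaiting_answer = False  # the pending request was serviced
    return False

good = ["i?req", "i!ans", "i?req", "i!ans", "i?cls"]   # a run of s_g
bad  = ["i?req", "i?req", "i!ans", "i?cls"]            # a run of s_b
```

The run of $s_g$ passes the check, while the run of $s_b$ taking the underlined branch fails it, matching the satisfaction claims above.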
### 3 An Operational Model for Enforcement
Our operational mechanism for enforcing properties over systems uses the (symbolic) transducers $m, n \in \text{Trn}$ defined in Figure 2. The transition rules in Figure 2 assume closed terms, i.e., for every symbolic-prefix transducer $[p, c, p'].m$, $p$ is closed and $(fv(c) \cup fv(p') \cup fv(m)) \subseteq bv(p)$, and yield an LTS with labels of the form $\gamma, \mu$, where $\gamma \in (\text{Act} \cup \{\bullet\})$ and $\mu \in \text{Act} \cup \{\tau\}$. Our syntax
Syntax

\[ m, n \in \text{Trn} ::= \text{id} \mid [p, c, p'].m \mid \sum_{i \in I} m_i \mid \text{rec } x.m \mid x \]

Dynamics

\[
\begin{array}{c}
\textsc{eId}\ \dfrac{}{\text{id} \xrightarrow{\alpha,\,\alpha} \text{id}}
\qquad
\textsc{eSel}\ \dfrac{m_j \xrightarrow{\gamma,\,\mu} n \qquad j \in I}{\sum_{i \in I} m_i \xrightarrow{\gamma,\,\mu} n}
\qquad
\textsc{eRec}\ \dfrac{m\{\text{rec } x.m/x\} \xrightarrow{\gamma,\,\mu} n}{\text{rec } x.m \xrightarrow{\gamma,\,\mu} n}
\\[3ex]
\textsc{eTrn}\ \dfrac{\text{mtch}(p, \gamma) = \sigma \qquad c\sigma \downarrow \text{true} \qquad \mu = p'\sigma}{[p, c, p'].m \xrightarrow{\gamma,\,\mu} m\sigma}
\\[3ex]
\textsc{iTrn}\ \dfrac{s \xrightarrow{\alpha} s' \qquad m \xrightarrow{\alpha,\,\mu} n}{m[s] \xrightarrow{\mu} n[s']}
\qquad
\textsc{iAsy}\ \dfrac{s \xrightarrow{\tau} s'}{m[s] \xrightarrow{\tau} m[s']}
\\[3ex]
\textsc{iIns}\ \dfrac{m \xrightarrow{\bullet,\,\mu} n}{m[s] \xrightarrow{\mu} n[s]}
\qquad
\textsc{iTer}\ \dfrac{s \xrightarrow{\alpha} s' \qquad \neg(m \xrightarrow{\alpha}) \qquad \neg(m \xrightarrow{\bullet})}{m[s] \xrightarrow{\alpha} \text{id}[s']}
\end{array}
\]
Figure 2 A model for transducers ($I$ is a finite index set and $m \xrightarrow{\gamma}$ means $\exists \mu, n \cdot m \xrightarrow{\gamma,\,\mu} n$).
assumes a well-formedness constraint where for every $[p, c, p'].m$, $\text{bv}(c) \cup \text{bv}(p') = \emptyset$. Intuitively, a transition $m \xrightarrow{\gamma,\,\mu} n$ denotes the fact that the transducer in state $m$ transforms the action $\gamma$ (produced by the system) into the action $\mu$ (which can possibly be silent) and transitions into state $n$. In this sense, the transducer label $\alpha, \tau$ represents the suppression of action $\alpha$, the label $\alpha, \beta$ represents the replacement of $\alpha$ by $\beta$, and $\alpha, \alpha$ denotes the identity transformation. The special case $\bullet, \alpha$ encodes the insertion of $\alpha$, where $\bullet$ represents that the transition is not induced by any system action.
The key transition rule in Figure 2 is \(\text{eTrn}\). It states that the symbolic-prefix transducer \([p, c, p'].m\) can transform an (extended) action \(\gamma\) into the concrete action \(\mu\), as long as the action matches with pattern \(p\) with substitution \(\sigma\), \(\text{mtch}(p, \gamma) = \sigma\), and the condition is satisfied by \(\sigma\), \(c\sigma \downarrow \text{true}\) (the matching function is lifted to extended actions and patterns in the obvious way, where \(\text{mtch}(\bullet, \bullet) = \emptyset\)). In such a case, the transformed action is \(\mu = p'\sigma\), i.e., the action \(\mu\) resulting from the instantiation of the free data variables in pattern \(p'\) with the corresponding values mapped by \(\sigma\), and the transducer state reached is \(m\sigma\). By contrast, in rule \(\text{eId}\), the transducer \(\text{id}\) acts as the identity and leaves actions unchanged. The remaining rules are fairly standard and unremarkable.
Figure 2 also describes an instrumentation relation which relates the behaviour of the SuS \(s\) with the transformations of a transducer monitor \(m\) that agrees with the observable actions \(\text{Act}\) of \(s\). The term \(m[s]\) thus denotes the resulting monitored system whose behaviour is defined in terms of \(\text{Act} \cup \{\tau\}\) from the system’s LTS. Concretely, rule \(\text{iTrn}\) states that when a system \(s\) transitions with an observable action \(\alpha\) to \(s'\) and the transducer \(m\) can transform this action into \(\mu\) and transition to \(n\), the instrumented system \(m[s]\) transitions with action \(\mu\) to \(n[s']\). However, when \(s\) transitions with a silent action, rule \(\text{iAsy}\) allows it to do so independently of the transducer. Dually, rule \(\text{iIns}\) allows the transducer to insert an action \(\mu\) independently of \(s\)’s behaviour. Rule \(\text{iTer}\) is analogous to standard monitor instrumentation rules for premature termination of the transducer [22, 25, 23, 1], and accounts for underspecification of transformations. Thus, if a system \(s\) transitions with an observable action \(\alpha\) to \(s'\), and the transducer \(m\) neither specifies how to transform it, \(\neg(m \xrightarrow{\alpha})\), nor can transition to a new transducer state by inserting an action, \(\neg(m \xrightarrow{\bullet})\), the system is still allowed to transition while the transducer’s transformation activity is ceased, i.e., it acts like the identity id from that point onwards.
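The four instrumentation rules can be approximated by a small executable sketch. This is our own simplified encoding, not the paper's formal model: monitors are modelled as functions from an extended action γ (a system action, or `"•"` for insertions) to an optional pair (µ, next monitor), and the example monitor names are assumptions.

```python
# A sketch of the instrumentation rules of Figure 2 over action traces.
TAU, INS = "τ", "•"

def identity(gamma):
    # id: leaves every system action unchanged (rule eId); never inserts.
    return None if gamma == INS else (gamma, identity)

def run(monitor, actions):
    """Drive the monitored system m[s]; return the visible trace (τ omitted)."""
    trace = []
    while (ins := monitor(INS)) is not None:     # rule iIns: inserted actions
        mu, monitor = ins
        trace.append(mu)
    for alpha in actions:
        out = monitor(alpha)
        if out is None:
            monitor = identity                   # rule iTer: underspecified → id
            trace.append(alpha)
        else:
            mu, monitor = out                    # rule iTrn: α transformed into µ
            if mu != TAU:                        # suppressed actions stay invisible
                trace.append(mu)
        while (ins := monitor(INS)) is not None:
            mu, monitor = ins
            trace.append(mu)
    return trace

def suppress_extra_reqs(gamma, pending=False):
    # A monitor in the spirit of m_t on port i: pass the first i?req, then
    # suppress further i?req until an i!ans is passed through.
    if gamma == INS:
        return None
    if gamma == "i?req":
        return (TAU if pending else gamma,
                lambda g: suppress_extra_reqs(g, True))
    if gamma == "i!ans":
        return (gamma, lambda g: suppress_extra_reqs(g, False))
    return (gamma, lambda g: suppress_extra_reqs(g, pending))
```

Running it on the offending trace of $s_b$ drops the duplicated request: `run(lambda g: suppress_extra_reqs(g), ["i?req", "i?req", "i!ans", "i?cls"])` produces `["i?req", "i!ans", "i?cls"]`.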
**Example 2.** Consider the insertion transducer $m_i$ and the replacement transducer $m_r$ below:
$$m_i \equiv [\bullet, \text{true}, i?\text{req}].[\bullet, \text{true}, i!\text{ans}].\text{id}$$
$$m_r \equiv \text{rec } x.([(d)?\text{req}, \text{true}, j?\text{req}].x + [(d)!\text{ans}, \text{true}, j!\text{ans}].x + [(d)?\text{cls}, \text{true}, j?\text{cls}].x)$$
When instrumented with a system, $m_i$ inserts the two successive actions $i?req$ and $i!ans$ before behaving as the identity. Concretely in the case of $s_b$, we can only start the computation as:
$$m_i[s_b] \xrightarrow{i?\text{req}} [\bullet, \text{true}, i!\text{ans}].\text{id}\,[s_b] \xrightarrow{i!\text{ans}} \text{id}[s_b] \xrightarrow{\alpha} \ldots \text{ (where } s_b \xrightarrow{\alpha})$$
By contrast, $m_r$ transforms input actions with either payload $req$ or $cls$ and output actions with payload $ans$ on any port name, into the respective actions on port $j$. For instance:
$$m_r[s_b] \xrightarrow{j?\text{req}} m_r[i!\text{ans}.s_b] \xrightarrow{j!\text{ans}} m_r[s_b] \xrightarrow{j?\text{cls}} m_r[\text{nil}]$$
Consider now the two suppression transducers $m_s$ and $m_t$ for actions on ports other than $j$:
$$m_s \equiv \text{rec } x.([(d)?\text{req}, d \neq j, \tau].x + [(d)!\text{ans}, \text{true}, d!\text{ans}].x)$$
$$m_t \equiv \text{rec } x.[(d)?\text{req}, d \neq j, d?\text{req}].\text{rec } y.([d!\text{ans}, \text{true}, d!\text{ans}].x + [d?\text{req}, \text{true}, \tau].y)$$
Monitor $m_s$ suppresses any requests on ports other than $j$, and continues to do so after any answers on such ports. When instrumented with $s_b$, we can observe the following behaviour:
$$m_s[s_b] \xrightarrow{\tau} m_s[i!\text{ans}.s_b] \xrightarrow{i!\text{ans}} m_s[s_b] \xrightarrow{\tau} m_s[i!\text{ans}.s_b] \xrightarrow{i!\text{ans}} m_s[s_b] \ldots$$
Note that $m_s$ does not specify a transformation behaviour for when the monitored system produces inputs with payload other than $req$. The instrumentation handles this underspecification by ceasing suppression activity; in the case of $s_b$, we get $m_s[s_b] \xrightarrow{i?\text{cls}} \text{id}[\text{nil}]$. The transducer $m_t$ performs slightly more elaborate transformations. For interactions on ports other than $j$, it suppresses consecutive input requests following any serviced request (i.e., an input on $req$ followed by an output on $ans$) sequence. For $s_b$, we can observe the following:
$$m_t[s_b] \xrightarrow{i?\text{req}} \big(\text{rec } y.([i!\text{ans}, \text{true}, i!\text{ans}].m_t + [i?\text{req}, \text{true}, \tau].y)\big)[s_b]$$
$$\xrightarrow{\tau} \big(\text{rec } y.([i!\text{ans}, \text{true}, i!\text{ans}].m_t + [i?\text{req}, \text{true}, \tau].y)\big)[i!\text{ans}.s_b] \xrightarrow{i!\text{ans}} m_t[s_b]$$
In the sequel, we find it convenient to refer to the transformed pattern of $p$, namely $p$ with all the binding occurrences $(d)$ converted to free occurrences $d$ (note that if $bv(p) = \emptyset$, the transformed pattern coincides with $p$). As shorthand notation, we elide the second pattern $p'$ in a transducer $[p, c, p'].m$ whenever $p'$ is the transformed pattern of $p$, and simply write $[p, c].m$. Similarly, we elide $c$ whenever $c = \text{true}$. This allows us to express $m_t$ from Example 2 as $\text{rec } x.[(d)?\text{req}, d \neq j].\text{rec } y.([d!\text{ans}].x + [d?\text{req}, \tau].y)$.
### 4 Enforceability
The enforceability of a logic rests on the relationship between the semantic behaviour specified by the logic on the one hand, and the ability of the operational mechanism (the transducers and instrumentation of Section 3 in our case) to enforce the specified behaviour on the other.
**Definition 3 (Enforceability).** A logic $\mathcal{L}$ is enforceable iff every formula $\varphi \in \mathcal{L}$ is enforceable. A formula $\varphi$ is enforceable iff there exists a transducer $m$ such that $m$ enforces $\varphi$.
Definition 3 depends on what is considered to be an adequate definition for “$m$ enforces $\varphi$”. It is reasonable to expect that the latter definition should concern any system that the transducer $m$—hereafter referred to as the enforcer—is instrumented with. In particular, for any system $s$, the resulting composite system obtained from instrumenting the enforcer $m$ with it should satisfy the property of interest, $\varphi$, whenever this property is satisfiable.
Definition 4 (Sound Enforcement). Enforcer $m$ soundly enforces a formula $\varphi$, denoted as $\text{senf}(m, \varphi)$, iff for all $s \in \text{Sys}$, $\varphi \in \text{Sat}$ implies $m[s] \in [\varphi]$.
Example 5. Recall $\varphi_1$, $s_g$ and $s_b$ from Example 1, where $s_g \in [\varphi_1]$ (hence $\varphi_1 \in \text{Sat}$) and $s_b \not\in [\varphi_1]$. For the enforcers $m_i$, $m_r$, $m_s$ and $m_t$ presented in Example 2, we have:
1. $m_i[s_b] \not\in [\varphi_1]$, since $m_i[s_b] \xrightarrow{i?\text{req}} \cdot \xrightarrow{i!\text{ans}} \text{id}[s_b] \xrightarrow{i?\text{req}} \text{id}[s_b] \xrightarrow{i?\text{req}} \text{id}[s_b]$. This counterexample implies that $\neg\text{senf}(m_i, \varphi_1)$.
2. $m_r[s_g] \in [\varphi_1]$ and $m_r[s_b] \in [\varphi_1]$. Intuitively, this is because the ensuing instrumented systems only generate (replaced) actions that are not of concern to $\varphi_1$. Since this behaviour applies to any system $m_r$ is composed with, we can conclude that $\text{senf}(m_r, \varphi_1)$.
3. $m_s[s_g] \in [\varphi_1]$ and $m_s[s_b] \in [\varphi_1]$ because the resulting instrumented systems never produce inputs with $req$ on a port number other than $j$. We can thus conclude that $\text{senf}(m_s, \varphi_1)$.
4. $m_t[s_g] \in [\varphi_1]$ and $m_t[s_b] \in [\varphi_1]$. Since the resulting instrumentation suppresses consecutive input requests (if any) after any number of serviced requests on any port other than $j$, we can conclude that $\text{senf}(m_t, \varphi_1)$.
By some measures, sound enforcement is a relatively weak requirement for adequate enforcement as it does not regulate the extent of the induced enforcement. More concretely, consider the case of enforcer $m_s$ from Example 2. Although $m_s$ manages to suppress the violating executions of system $s_b$, thereby bringing it in line with property $\varphi_1$, it needlessly modifies the behaviour of $s_g$ (namely, it prohibits it from producing any inputs with $req$ on port numbers other than $j$), even though $s_g$ satisfies $\varphi_1$. Thus, in addition to sound enforcement, we require a transparency condition for adequate enforcement. The requirement dictates that whenever a system $s$ already satisfies the property $\varphi$, the assigned enforcer $m$ should not alter the behaviour of $s$. Put differently, the behaviour of the enforced system should be behaviourally equivalent to the original system.
Definition 6 (Transparent Enforcement). An enforcer $m$ is transparent when enforcing a formula $\varphi$, denoted as $\text{tenf}(m, \varphi)$, iff for all $s \in \text{SYS}$, $s \in |\varphi|$ implies $m[s] \sim s$.
Example 7. We have already argued, via the counterexample $s_g$, why $m_s$ does not transparently enforce $\varphi_1$. We can also argue easily why $\neg\text{tenf}(m_r, \varphi_1)$: the simple system $i?\text{req}.\text{nil}$ trivially satisfies $\varphi_1$ but, clearly, we have the inequality $m_r[i?\text{req}.\text{nil}] \not\sim i?\text{req}.\text{nil}$ since $m_r[i?\text{req}.\text{nil}] \xrightarrow{j?\text{req}} m_r[\text{nil}]$ whereas $i?\text{req}.\text{nil} \not\xrightarrow{j?\text{req}}$.

It turns out that $\text{tenf}(m_t, \varphi_1)$ holds, however. Although this property is not as easy to show, due to the universal quantification over all systems, we can get a fairly good intuition for why this is the case via the example $s_g$: it satisfies $\varphi_1$ and $m_t[s_g] \sim s_g$ holds.
Definition 8 (Enforcement). A monitor $m$ enforces property $\varphi$ whenever it does so (i) soundly, Definition 4 and (ii) transparently, Definition 6.
For any reasonably expressive logic (such as $\mu$HML), it is usually the case that not every formula can be enforced, as the following example informally illustrates.
\( \varphi, \psi \in \text{sHML} ::= \text{tt} \mid \text{ff} \mid \bigwedge_{i \in I} \varphi_i \mid [p, c]\varphi \mid X \mid \max X.\varphi \)
Figure 3 The syntax for the safety \(\mu\)HML fragment, sHML.
Example 9. Consider the \(\mu\)HML property \(\varphi_{ns}\), together with the two systems \(s_{ra}\) and \(s_r\):
\[
\varphi_{ns} \equiv [i?\text{req}]ff \lor [i!\text{ans}]ff \quad s_{ra} \equiv i?\text{req}.\text{nil} + i!\text{ans}.\text{nil} \quad s_r \equiv i?\text{req}.\text{nil}
\]
A system satisfies \(\varphi_{ns}\) if either it cannot produce action \(i?\text{req}\) or it cannot produce action \(i!\text{ans}\). Clearly, \(s_{ra}\) violates this property as it can produce both. This system can only be enforced via action suppressions or replacements because insertions would immediately break transparency. Without loss of generality, assume that our monitors employ suppressions (the same argument applies for action replacement). The monitor \(m_r \equiv \text{rec}\ y. ([i?\text{req}, \tau].y + [i!\text{ans}, \tau].y)\) would in fact be able to suppress the offending actions produced by \(s_{ra}\), thus obtaining \(m_r[s_{ra}] \in \llbracket \varphi_{ns} \rrbracket\).
However, it would also suppress the sole action \(i?\text{req}\) produced by the system \(s_r\), even though this system satisfies \(\varphi_{ns}\). This would, in turn, violate the transparency criterion of Definition 6 since it needlessly suppresses \(s_r\)'s actions, i.e., although \(s_r \in \llbracket \varphi_{ns} \rrbracket\) we have \(m_r[s_r] \not\sim s_r\). The intuitive reason for this problem is that a monitor cannot, in principle, look into the computation graph of a system, but is limited to the behaviour the system exhibits at runtime.
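The intuition that a monitor only sees runtime behaviour can be illustrated concretely: every trace of \(s_r\) is also a trace of \(s_{ra}\), so no monitor can distinguish the two systems on the shared prefix. The encoding of systems as finite transition maps below is our own assumption for the example.

```python
# Sketch: compute bounded trace sets of the two systems from Example 9
# and observe that the traces of s_r are contained in those of s_ra.
def traces(step, state, limit=3):
    """All traces of length ≤ limit from `state` under transition map `step`."""
    result = {()}
    frontier = {((), state)}
    for _ in range(limit):
        nxt = set()
        for t, s in frontier:
            for a, s2 in step.get(s, []):
                nxt.add((t + (a,), s2))
        result |= {t for t, _ in nxt}
        frontier = nxt
    return result

# s_ra = i?req.nil + i!ans.nil   and   s_r = i?req.nil
s_ra = {"s_ra": [("i?req", "nil"), ("i!ans", "nil")]}
s_r = {"s_r": [("i?req", "nil")]}
```

Since `traces(s_r, "s_r")` is a subset of `traces(s_ra, "s_ra")`, a suppression monitor that drops \(i?\text{req}\) to correct \(s_{ra}\) necessarily also drops it when run against the satisfying system \(s_r\), breaking transparency.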
5 Synthesising Suppression Enforcers
Despite their merits, Definitions 3 and 8 are not easy to work with. The universal quantifications over all systems in Definitions 4 and 6 make it hard to establish that a monitor correctly enforces a property. Moreover, according to Definition 3, in order to determine whether a particular property is enforceable or not, one would need to show the existence of a monitor that correctly enforces it; put differently, showing that a property is not enforceable entails another universal quantification, this time showing that no monitor can possibly enforce the property. Lifting the question of enforceability to the level of a (sub)logic entails a further universal quantification, this time on all the logical formulas of the logic; this is often an infinite set. We address these problems in two ways. First, we identify a non-trivial syntactic subset of \(\mu\)HML that is guaranteed to be enforceable; in a multi-pronged approach to system verification, this could act as a guide for whether the property should be considered at a pre-deployment or post-deployment phase. Second, for every formula \(\varphi\) in this enforceable subset, we provide an automated procedure to synthesise a monitor \(m\) from it that correctly enforces \(\varphi\) when instrumented over arbitrary systems, according to Definition 8. This procedure can then be used as a basis for constructing tools that automate property enforcement.
In this paper, we limit our enforceability study to suppression monitors, transducers that are only allowed to intervene by dropping (observable) actions. Despite being more constrained, suppression monitors side-step problems associated with what data to use in a payload-carrying action generated by the enforcer, as in the case of insertion and replacement monitors: the notion of a default value for certain data domains is not always immediate. Moreover, suppression monitors are particularly useful for enforcing safety properties, as shown in [33, 10, 20]. Intuitively, a suppression monitor would suppress actions as soon as it becomes apparent that a violation is about to be committed by the SuS. Such an intervention intrinsically relies on the detection of a violation. To this effect, we use a prior result from the work on monitorability, which identifies the syntactic fragment sHML, Figure 3, as a subset of $\mu$HML formulas whose violations can be detected at runtime.
Moreover, we would also
require the synthesis function to be compositional, whereby the definition of the enforcer
for a composite formula is defined in terms of the enforcers obtained for the constituent
subformulas. There are a number of reasons for this requirement. For one, it would simplify
our analysis of the produced monitors and allow us to use standard inductive proof techniques
to prove properties about the synthesis function, such as the aforementioned criteria \( (ii) \).
However, a naive approach to such a scheme is bound to fail, as discussed in the next example.
Example 10. Consider $\varphi_2$, a reformulation of $\varphi_1$ from Example 1 with the same semantic meaning:

\[ \varphi_2 \equiv \max X.\big([(d)?\text{req}, d \neq j][d!\text{ans}, \text{true}]X \;\land\; [(d)?\text{req}, d \neq j][d?\text{req}, \text{true}]\text{ff}\big) \]
At an intuitive level, the suppression monitor that one would expect to obtain for the subformula \( \varphi_2' \equiv [(d)?\text{req}, d \neq j][d?\text{req}, \text{true}]\text{ff} \) is \( [(d)?\text{req}, d \neq j].\text{rec } y.[d?\text{req}, \tau].y \) (i.e., an enforcer that repeatedly drops any \( req \) inputs following a \( req \) input on the same port), whereas the monitor obtained for the subformula \( \varphi_2'' \equiv [(d)?\text{req}, d \neq j][d!\text{ans}, \text{true}]X \) is \( [(d)?\text{req}, d \neq j].[d!\text{ans}].x \) (assuming some variable mapping from \( X \) to \( x \)). These monitors would then be combined in the synthesis for \( \max X.(\varphi_2'' \land \varphi_2') \) as

\[ m_b \equiv \text{rec } x.\big([(d)?\text{req}, d \neq j].[d!\text{ans}].x + [(d)?\text{req}, d \neq j].\text{rec } y.[d?\text{req}, \tau].y\big) \]

One can easily see that \( m_b \) does not behave deterministically, nor does it soundly enforce \( \varphi_2 \). For instance, for the violating system \( i?\text{req}.i?\text{req}.\text{nil} \not\in [\varphi_2](=[\varphi_1]) \) we can observe the transition sequence \( m_b[i?\text{req}.i?\text{req}.\text{nil}] \xrightarrow{i?\text{req}} ([i!\text{ans}].m_b)[i?\text{req}.\text{nil}] \xrightarrow{i?\text{req}} \text{id}[\text{nil}] \).
Instead of complicating our synthesis function to cater for anomalies such as those presented in Example 10 (also making it less compositional in the process), we opted for a two-stage synthesis procedure. First, we consider a normalised subset of sHML formulas which is amenable to a (straightforward) compositional synthesis function definition. This also facilitates the proofs of the conditions required by Definition 8 for any synthesised enforcer. Second, we show that every sHML formula can be reformulated in this normalised form without affecting its semantic meaning. Our two-stage approach is then expressive enough to establish the enforceability of all of sHML.
Normalised sHML formulas are given by the following grammar:

\[ \varphi, \psi \in \text{sHML}_{nf} ::= \text{tt} \mid \text{ff} \mid \bigwedge_{i \in I} [p_i, c_i]\varphi_i \mid X \mid \max X.\varphi \]
The above grammar combines necessity operators with conjunctions into one construct
\( \bigwedge_{i \in I} [p_i, c_i]\varphi_i \). Normalised sHML formulas are required to satisfy two further conditions:
1. For every \( \bigwedge_{i \in I} [p_i, c_i]\varphi_i \) and all \( j, h \in I \) where \( j \neq h \), we have \( [[p_j, c_j]] \cap [[p_h, c_h]] = \emptyset \).
2. For every \( \max X.\varphi \) we have \( X \in \text{fv}(\varphi) \).
In a (closed) normalised sHML formula, the basic terms tt and ff can never appear unguarded unless they are at the top level (e.g., we can never have \( \varphi \land \text{ff} \) or \( \max X_0 \ldots \max X_n.\text{ff} \)). Moreover, in any conjunction of necessity subformulas, \( \bigwedge_{i \in I} [p_i, c_i]\varphi_i \), the necessity guards are disjoint and at most one necessity guard can satisfy any particular action.
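Over a finite alphabet, condition 1 can be checked directly by intersecting guard denotations. The sketch below is our own illustration, with guards encoded as predicates on actions; the names and action strings are assumptions, not the paper's.

```python
# Check that the guards of a normalised conjunction are pairwise disjoint,
# so at most one branch can fire on any given action.
from itertools import combinations

def denotation(guard, alphabet):
    """[[p, c]] as a set, with the guard given as a predicate on actions."""
    return {a for a in alphabet if guard(a)}

def disjoint_guards(guards, alphabet):
    return all(
        denotation(g1, alphabet).isdisjoint(denotation(g2, alphabet))
        for g1, g2 in combinations(guards, 2)
    )

alphabet = {"i?req", "i!ans", "i?cls"}
ok  = [lambda a: a == "i!ans", lambda a: a == "i?req"]       # disjoint guards
bad = [lambda a: a.startswith("i"), lambda a: a == "i?req"]  # overlapping guards
```

The `bad` pair mirrors the two branches of $\varphi_2$ in Example 10, whose shared guard is precisely what makes the naive synthesis non-deterministic.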
Definition 12. The synthesis function \( \langle - \rangle : \text{sHML}_{nf} \rightarrow \text{Trn} \) is defined inductively as:

\[
\langle X \rangle \triangleq x \qquad
\langle \text{tt} \rangle \triangleq \langle \text{ff} \rangle \triangleq \text{id} \qquad
\langle \max X.\varphi \rangle \triangleq \text{rec } x.\langle \varphi \rangle
\]
\[
\Big\langle \bigwedge_{i \in I} [p_i, c_i]\varphi_i \Big\rangle \triangleq \text{rec } y.\sum_{i \in I} m_i
\quad \text{where } m_i \triangleq
\begin{cases}
[p_i, c_i, \tau].y & \text{if } \varphi_i = \text{ff} \\[1ex]
[p_i, c_i, p_i].\langle \varphi_i \rangle & \text{otherwise}
\end{cases}
\quad (y \text{ fresh})
\]
The synthesis function is compositional. It assumes a bijective mapping between formula variables and monitor recursion variables and converts logical variables \( X \) accordingly, whereas maximal fixpoints, \( \max X.\varphi \), are converted into the corresponding recursive enforcer. The synthesis also converts truth and falsehood formulas, \( \text{tt} \) and \( \text{ff} \), into the identity enforcer \( \text{id} \). Normalised conjunctions, \( \bigwedge_{i \in I} [p_i, c_i]\varphi_i \), are synthesised into a recursive summation of enforcers, i.e., \( \text{rec } y.\sum_{i \in I} m_i \), where \( y \) is fresh, and every branch \( m_i \) can be either of the following:
(i) when \( m_i \) is derived from a branch of the form \( [p_i, c_i]\varphi_i \) where \( \varphi_i \neq \text{ff} \), the synthesis produces an enforcer with the identity transformation prefix, \( [p_i, c_i, p_i] \), followed by the enforcer synthesised from the continuation \( \varphi_i \), i.e., \( [p_i, c_i]\varphi_i \) is synthesised as \( [p_i, c_i, p_i].\langle \varphi_i \rangle \);
(ii) when \( m_i \) is derived from a branch of the form \( [p_i, c_i]\text{ff} \), the synthesis produces a suppression transformation, \( [p_i, c_i, \tau] \), that drops every concrete action matching the symbolic action \( [p_i, c_i] \), followed by the recursion variable of the branch, \( y \), i.e., a branch of the form \( [p_i, c_i]\text{ff} \) is translated into \( [p_i, c_i, \tau].y \).
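The synthesis function lends itself to a direct recursive implementation. The sketch below is our own rendering over a hypothetical AST encoding of $\text{sHML}_{nf}$; the tuple representation of formulas and the string output format are assumptions made for the example, not the paper's.

```python
# A sketch of the synthesis function ⟨−⟩ of Definition 12.
# Formulas: ("tt",), ("ff",), ("var", X), ("max", X, φ), and
# ("and", [(p, c, φ), …]) for a normalised conjunction ⋀_i [p_i, c_i]φ_i.
# Transducers are rendered as strings in the syntax of Figure 2.
from itertools import count

fresh = count()

def synth(phi):
    tag = phi[0]
    if tag in ("tt", "ff"):
        return "id"                               # ⟨tt⟩ = ⟨ff⟩ = id
    if tag == "var":
        return phi[1].lower()                     # logical X ↦ recursion var x
    if tag == "max":                              # ⟨max X.φ⟩ = rec x.⟨φ⟩
        return f"rec {phi[1].lower()}.{synth(phi[2])}"
    if tag == "and":
        y = f"y{next(fresh)}"                     # fresh recursion variable
        branches = []
        for p, c, cont in phi[1]:
            if cont == ("ff",):                   # violating branch: suppress
                branches.append(f"[{p},{c},τ].{y}")
            else:                                 # identity prefix, then ⟨cont⟩
                branches.append(f"[{p},{c},{p}].{synth(cont)}")
        return f"rec {y}.({' + '.join(branches)})"
    raise ValueError(tag)

# The inner conjunction ⋀{ [d!ans, true]X , [d?req, true]ff } of Example 13:
inner = ("and", [("d!ans", "true", ("var", "X")),
                 ("d?req", "true", ("ff",))])
```

Calling `synth(("max", "X", inner))` yields a term of the shape `rec x.rec y.([d!ans,true,d!ans].x + [d?req,true,τ].y)`, matching the case analysis of Definition 12: the `ff`-guarded branch becomes a suppression loop, the other an identity prefix.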
Example 13. Recall formula \( \varphi_1 \) from Example 1, recast in terms of \(\text{sHML}_{nf}\)’s grammar:

\[
\varphi_1 \triangleq \max X.[(d)?\text{req}, d \neq j]\big([d!\text{ans}, \text{true}]X \land [d?\text{req}, \text{true}]\text{ff}\big)
\]

Using the synthesis function defined in Definition 12, we can generate the enforcer

\[
\langle \varphi_1 \rangle = \text{rec } x.\text{rec } z.[(d)?\text{req}, d \neq j, d?\text{req}].\text{rec } y.([d!\text{ans}, \text{true}, d!\text{ans}].x + [d?\text{req}, \text{true}, \tau].y)
\]

which can be optimised by removing redundant recursive constructs (e.g., \( \text{rec } z.\_ \)), obtaining:

\[
\text{rec } x.[(d)?\text{req}, d \neq j, d?\text{req}].\text{rec } y.([d!\text{ans}, \text{true}, d!\text{ans}].x + [d?\text{req}, \text{true}, \tau].y) = m_t
\]
We now present the first main result of the paper.
Theorem 14 (Enforcement). The (sub)logic sHML_{nf} is enforceable.
Proof. By Definition 3, the result follows if we show that for all \( \varphi \in sHML_{nf} \), \( \langle \varphi \rangle \) enforces \( \varphi \). By Definition 8, this is a corollary following from Propositions 15 and 16 stated below.
Proposition 15 (Enforcement Soundness). For every system \( s \in \text{Sys} \) and formula \( \varphi \in \text{sHML}_{nf} \), if \( \varphi \) is satisfiable then \( \langle \varphi \rangle[s] \in [\varphi] \).
Proposition 16 (Enforcement Transparency). For every system \( s \in \text{Sys} \) and formula \( \varphi \in \text{sHML}_{nf} \), if \( s \in [\varphi] \) then \( \langle \varphi \rangle[s] \sim s \).
Following Theorem 14, to show that sHML is an enforceable logic, we only need to show that for every \( \varphi \in \text{sHML} \) there exists a corresponding \( \psi \in \text{sHML}_{nf} \) with the same semantic meaning, i.e., \([\varphi] = [\psi]\). In fact, we go a step further and provide a constructive proof using a transformation \( \langle\langle - \rangle\rangle : \text{sHML} \mapsto \text{sHML}_{nf} \) that derives a semantically equivalent \(\text{sHML}_{nf}\) formula from a standard sHML formula. As a result, from an arbitrary sHML formula \( \varphi \) we can then automatically synthesise a correct enforcer as \( \langle \langle\langle \varphi \rangle\rangle \rangle \), which is useful for tool construction.
Our transformation \( \langle\langle - \rangle\rangle \) relies on a number of steps; here we provide an outline of these steps. First, we assume sHML formulas that only use symbolic actions with normalised patterns \( p \), i.e., patterns that do not use any data values or free data variables (but they may use bound data variables). In fact, any symbolic action \( \{p, c\} \) can easily be converted into a corresponding one using normalised patterns, as shown in the next example.
**Example 17.** Consider the symbolic action \( \{d!\text{ans}, d \neq j\} \). It may be converted into a corresponding normalised symbolic action by replacing every occurrence of a data value or free data variable in the pattern with a fresh bound variable, and then adding an equality constraint between the fresh variable and the value or variable it replaces to the pattern condition. In our case, we would obtain \( \{(e)!(f),\ d \neq j \land e = d \land f = \text{ans}\} \).
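The normalisation recipe of Example 17 is mechanical enough to sketch directly. The function below is our own illustration, with patterns and conditions encoded as plain strings; it is not the paper's definition.

```python
import itertools

def normalise_action(pattern_parts, condition):
    """Rewrite a symbolic action so its pattern uses only fresh bound
    variables: each data value or free data variable in the pattern is
    replaced by a fresh variable, and an equality between the fresh variable
    and what it replaced is conjoined onto the condition."""
    fresh = (f"v{i}" for i in itertools.count())
    new_parts, equalities = [], []
    for part in pattern_parts:
        v = next(fresh)
        new_parts.append(v)
        equalities.append(f"{v} = {part}")
    return new_parts, " and ".join([condition] + equalities)
```

Applied to the pattern parts `["d", "ans"]` with condition `d != j`, it yields fresh pattern variables together with the condition `d != j and v0 = d and v1 = ans`, mirroring the result of Example 17.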
Our algorithm for converting sHML formulas (with normalised patterns) to sHML\(_{nf}\) formulas, \( \langle\langle - \rangle\rangle \), is based on Rabinovich’s work [41] for determinising systems of equations which, in turn, relies on the standard powerset construction for converting NFAs into DFAs. It consists of the following six stages, which we outline below:
1. **We unfold each recursive construct in the formula to push recursive definitions inside the formula body.** E.g., the formula \( \max X.([p_1, c_1]X \land [p_2, c_2]ff) \) is expanded to the formula \( [p_1, c_1](\max X.([p_1, c_1]X \land [p_2, c_2]ff)) \land [p_2, c_2]ff \).
2. **The formula is converted into a system of equations.** E.g., the expanded formula from the previous stage is converted into the set \( \{X_0 = [p_1, c_1]X_0 \land [p_2, c_2]X_1, X_1 = ff\} \).
3. **For every equation, the symbolic actions in the right-hand side that are of the same kind are alpha-converted so that their bound variables match.** E.g., consider \( X_0 = [p_1, c_1]X_0 \land [p_2, c_2]X_1 \) from the previous stage where, for the sake of the example, \( p_1 = (d_1)?(d_2) \) and \( p_2 = (d_3)?(d_4) \). The patterns in the symbolic actions are made syntactically equivalent by renaming \( d_3 \) and \( d_4 \) in \( \{p_2, c_2\} \) into \( d_1 \) and \( d_2 \) respectively.
4. **For equations with matching patterns in the symbolic actions, we create a variant that symbolically covers all the (satisfiable) permutations of the symbolic-action conditions.** E.g., consider \( X_0 = [p_1, c_1]X_0 \land [p_1, c_2]X_1 \) obtained after the previous stage. We expand this to \( X_0 = [p_1, c_1 \land c_2]X_0 \land [p_1, c_1 \land c_2]X_1 \land [p_1, c_1 \land \neg c_2]X_0 \land [p_1, \neg c_1 \land c_2]X_1 \).
5. **For equations with branches having syntactically equivalent symbolic actions, we carry out a unification procedure akin to standard powerset constructions.** E.g., we convert the equation from the previous step to \( X_{\{0\}} = [p_1, c_1 \land c_2]X_{\{0,1\}} \land [p_1, c_1 \land \neg c_2]X_{\{0\}} \land [p_1, \neg c_1 \land c_2]X_{\{1\}} \), using the (unified) fresh variables \( X_{\{0\}} \), \( X_{\{1\}} \) and \( X_{\{0,1\}} \).
6. **From the unified set of equations we regenerate the sHML formula, starting from \( X_{\{0\}} \).** This procedure may generate redundant recursion binders, i.e., \( \max X.\varphi \) where \( X \notin \text{fv}(\varphi) \), and we filter these out in a subsequent pass.
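The powerset-style unification of stage 5 can be pictured concretely: branches of an equation carrying syntactically identical symbolic actions are grouped, and their target variables collapse into a single set-indexed variable. This is our own toy encoding (actions and variables as plain strings), not the algorithm's actual data structures.

```python
from collections import defaultdict

def unify_branches(branches):
    """Group branches by their symbolic action (pattern, condition); each
    group's targets merge into one powerset-style variable, represented as a
    frozenset of the original equation variables."""
    grouped = defaultdict(set)
    for pattern, condition, target in branches:
        grouped[(pattern, condition)].add(target)
    return {action: frozenset(ts) for action, ts in grouped.items()}
```

On the stage-4 example, the two branches guarded by the same action with condition \( c_1 \land c_2 \) merge into a single branch targeting the unified variable indexed by \(\{0, 1\}\).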
We now state the second main result of the paper.
**Theorem 18 (Normalisation).** For any \( \varphi \in \text{sHML} \) there exists \( \psi \in \text{sHML}_{nf} \) s.t. \( [\varphi]= [\psi] \).
**Proof.** The witness formula in normal form is \( \langle\langle \varphi \rangle\rangle \), where we show that each and every stage in the translation procedure preserves semantic equivalence. \(\blacksquare\)
6 Alternative Transparency Enforcement
Transparency for a property $\varphi$, Definition 6, only restricts enforcers from modifying the behaviour of satisfying systems, i.e., when \( s \in [\varphi] \), but fails to specify any enforcement behaviour for the cases when the SuS violates the property, \( s \notin [\varphi] \). In this section, we consider an alternative transparency requirement for a property $\varphi$ that incorporates the expected enforcement behaviour for both satisfying and violating systems. More concretely, in the case of safety languages such as sHML, a system typically violates a property along a specific set of execution traces; in the case of a satisfying system this set of “violating traces” is empty. However, not every behaviour of a violating system is part of this set of violating traces and, in such cases, the respective enforcer should be required to leave the generated behaviour unaffected.
Definition 19 (Violating-Trace Semantics). A logic $\mathcal{L}$ with an interpretation over systems $[-]: \mathcal{L} \mapsto \mathcal{P}(\text{SYS})$ has a violating-trace semantics whenever it has a secondary interpretation $[-]_v: \mathcal{L} \mapsto \mathcal{P}(\text{SYS} \times \text{ACT}^*)$ satisfying the following conditions for all $\varphi \in \mathcal{L}$:
1. \( (s, t) \in [\varphi]_v \) implies \( s \notin [\varphi] \) and \( s \overset{t}{\Longrightarrow} \),
2. \( s \notin [\varphi] \) implies \( \exists t \cdot (s, t) \in [\varphi]_v \).
We adapt the work in [26] to give sHML a violating-trace semantics. Intuitively, the judgement $(s, t) \in [\varphi]_v$, according to Definition 20 below, denotes the fact that $s$ violates the sHML property $\varphi$ along trace $t$.
Definition 20 (Alternative Semantics for sHML [26]). The forcing relation $\vdash_v \subseteq (\text{SYS} \times \text{ACT}^* \times \text{sHML})$ is the least relation satisfying the following rules:
\[
\begin{align*}
(s, \epsilon, \text{ff}) \in \mathcal{R} \quad & \text{always} \\
(s, t, \textstyle\bigwedge_{i \in I} \varphi_i) \in \mathcal{R} \quad & \text{if } \exists j \in I \text{ such that } (s, t, \varphi_j) \in \mathcal{R} \\
(s, \alpha t, [p, c]\varphi) \in \mathcal{R} \quad & \text{if } \text{mtch}(p, \alpha) = \sigma \text{ and } c\sigma \Downarrow \text{true} \text{ and } s \overset{\alpha}{\Longrightarrow} s' \text{ and } (s', t, \varphi\sigma) \in \mathcal{R} \\
(s, t, \text{max } X. \varphi) \in \mathcal{R} \quad & \text{if } (s, t, \varphi\{\text{max } X. \varphi/X\}) \in \mathcal{R}.
\end{align*}
\]
We write $s, t \vdash_v \varphi$ (or $(s, t) \in [\varphi]_v$) in lieu of $(s, t, \varphi) \in \vdash_v$. We say that trace $t$ is a violating trace for $s$ with respect to $\varphi$ whenever $s, t \vdash_v \varphi$. Dually, $t$ is a non-violating trace for $\varphi$ whenever there does not exist a system $s$ such that $s, t \vdash_v \varphi$.
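Read operationally, the rules of Definition 20 check a finite trace recursively. The sketch below is our own rendering for the simplified case of ground actions (no pattern binders or conditions, so matching is plain equality); the LTS is a dictionary from (state, action) to successor states, and we assume guarded formulas so the fixpoint unfolding terminates.

```python
def subst(phi, x, rep):
    """Substitute formula rep for the free variable x in phi."""
    tag = phi[0]
    if tag == "var":
        return rep if phi[1] == x else phi
    if tag == "conj":
        return ("conj", [subst(p, x, rep) for p in phi[1]])
    if tag == "nec":
        return ("nec", phi[1], subst(phi[2], x, rep))
    if tag == "max":
        return phi if phi[1] == x else ("max", phi[1], subst(phi[2], x, rep))
    return phi

def violates(s, trace, phi, lts):
    """(s, trace, phi) in the least relation of Definition 20 (ground case)."""
    tag = phi[0]
    if tag == "ff":
        return trace == ()                   # (s, eps, ff): always
    if tag == "conj":                        # some conjunct is violated
        return any(violates(s, trace, p, lts) for p in phi[1])
    if tag == "nec":                         # [a]phi': consume action a
        _, a, cont = phi
        return bool(trace) and trace[0] == a and any(
            violates(s2, trace[1:], cont, lts) for s2 in lts.get((s, a), ()))
    if tag == "max":                         # unfold the fixpoint
        return violates(s, trace, subst(phi[2], phi[1], phi), lts)
    return False
```

For a safety formula of the shape of \( \varphi_1 \) (a second request before an answer is forbidden), the trace req·req is violating while req·ans is not.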
Example 21. Recall \( \varphi_1, s_b \) from Example 1 where \( \varphi_1 \in \text{sHML} \), and also \( m_t \) from Example 5, where we argued in Example 13 that \( \langle \varphi_1 \rangle = m_t \) (modulo cosmetic optimisations). Even though \( s_b \notin [\varphi_1] \), not all of its exhibited behaviours constitute violating traces: for instance, \( s_b \xrightarrow{\text{req transaction}} s_b \) is not a violating trace according to Definition 20. Correspondingly, we also have \( m_t[s_b] \xrightarrow{\text{req transaction}} m_t[s_b] \).
Theorem 22 (Adapted and extended from [26]). The alternative interpretation $[-]_v$ of Definition 20 is a violating-trace semantics for sHML (with $[-]$ from Figure 1) in the sense of Definition 19.
Equipped with Definition 20, we can give an alternative definition of transparency that concerns itself with preserving exhibited traces that are non-violating. We can then show that the monitor synthesis for sHML of Definition 12 observes non-violating trace transparency.
Definition 23 (Non-Violating Trace Transparency). An enforcer \( m \) is transparent with respect to the non-violating traces of a formula \( \varphi \), denoted as \( \text{nvtenf}(m, \varphi) \), iff for all \( s \in \text{Sys} \) and \( t \in \text{Act}^* \), when \( s, t \not\vdash_v \varphi \) then
\[
\begin{align*}
& s \xrightarrow{t} s' \text{ implies } m[s] \xrightarrow{t} m'[s'] \text{ for some } m', \text{ and} \\
& m[s] \xrightarrow{t} m'[s'] \text{ implies } s \xrightarrow{t} s'.
\end{align*}
\]
Proposition 24 (Non-Violating Trace Transparency). For all \( \varphi \in \text{sHML} \), \( s \in \text{Sys} \) and \( t \in \text{Act}^* \), when \( s, t \not\vdash_v \varphi \) then
\[
\begin{align*}
& s \xrightarrow{t} s' \text{ implies } \langle \varphi \rangle[s] \xrightarrow{t} m'[s'] \text{ for some } m', \text{ and} \\
& \langle \varphi \rangle[s] \xrightarrow{t} m'[s'] \text{ implies } s \xrightarrow{t} s'.
\end{align*}
\]
We can thus obtain a new definition for “\( m \) enforces \( \varphi \)” instead of Definition 8 by requiring sound enforcement, Definition 6, together with non-violating trace transparency, Definition 23 (instead of the transparent enforcement of Definition 6). This, in turn, gives us a new definition of enforceability for a logic, akin to Definition 3. Using Propositions 15 and 24, one can show that \( \text{sHML} \) is enforceable with respect to this new definition as well.
7 Conclusion
This paper presents a preliminary investigation of the enforceability of properties expressed in a process logic. We have focussed on a highly expressive and standard logic, \( \mu \text{HML} \), and studied the ability to enforce \( \mu \text{HML} \) properties via a specific kind of monitor that performs suppression-based enforcement. We concluded that \( \text{sHML} \), identified in earlier work as a maximally expressive safety fragment of \( \mu \text{HML} \), is also an enforceable logic. To show this, we first defined enforceability for logics and system descriptions interpreted over labelled transition systems. Although enforceability builds upon soundness and transparency requirements that have been considered in other work, our branching-time framework allowed us to consider novel definitions for these requirements. We also contend that the definitions that we develop for the enforcement framework are fairly modular: e.g., the instrumentation relation is independent of the specific language constructs defining our transducer monitors and it functions as expected as long as the transition semantics of the transducer and the system are in agreement. Based on this notion of enforcement, we devise a two-phase procedure to synthesise correct enforcement monitors. We first identify a syntactic subset of our target logic \( \text{sHML} \) that affords certain structural properties and permits a compositional definition of the synthesis function. We then show that, by adapting existing rewriting techniques to our setting, we can convert any \( \text{sHML} \) formula into this syntactic subset.
Related Work
In his seminal work [44], Schneider regards a property (in a linear-time setting) as enforceable if its violation can be detected by a truncation automaton, which prevents the violation from occurring by terminating the system; since they work by preventing misbehaviour, such enforcers can only enforce safety properties. Ligatti et al. [33] extended this work via edit automata – an enforcement mechanism capable of suppressing and inserting system actions. A property is thus enforceable if it can be expressed as an edit automaton that transforms invalid executions into valid ones via suppressions and insertions. Edit automata are capable of enforcing instances of safety and liveness properties, along with other properties such as infinite renewal properties [33, 10]. As a means to assess the correctness of these automata, the authors introduced soundness and transparency. In both of these settings, there is no clear separation between the specification and the enforcement mechanism, and properties are encoded in terms of the languages accepted by the enforcement model itself, i.e., as edit/truncation automata. By contrast, we keep the specification and verification aspects of the logic separate.

CONCUR 2018
Bielova et al. [10, 11] remark that soundness and transparency do not specify to what extent a transducer should modify an invalid execution. They thus introduce a predictability criterion to prevent transducers from transforming invalid executions arbitrarily. More concretely, a transducer is predictable if one can predict the number of transformations that it will apply in order to transform an invalid execution into a valid one, thereby preventing enforcers from applying unnecessary transformations over an invalid execution. Using this notion, Bielova et al. thus devise a more stringent notion of enforceability. Although we do not explore this avenue, Definition 23 may be viewed as an attempt to constrain transformations of violating systems in a branching-time setup, and should be complementary to these predictability requirements.
Könighofer et al. in [29] present a synthesis algorithm that produces action replacement transducers called shields from safety properties encoded as automata-based specifications. Shields analyse the inputs and outputs of a reactive system and enforce properties by modifying the least amount of output actions whenever the system deviates from the specified behaviour. By definition, shields should adhere to two desired properties, namely correctness and minimum deviation, which are, in some sense, analogous to soundness and transparency respectively. Falcone et al. in [19, 21, 20] also propose synthesis procedures to translate properties — expressed as Streett automata — into the respective enforcers. The authors show that most of the property classes defined within the Safety-Progress hierarchy [35] are enforceable, as they can be encoded as Streett automata and subsequently converted into enforcement automata. As opposed to Ligatti et al., both Könighofer et al. and Falcone et al. separate the specification of the property from the enforcement mechanism, but unlike our work they do not study the enforceability of a branching-time logic.
To the best of our knowledge, the only other work that tackles enforceability for the modal μ-calculus [30] (a reformulation of μHML) is that of Martinelli et al. in [37, 38]. Their approach is, however, different from ours. In addition to the μ-calculus formula to enforce, their synthesis function also takes a “witness” system satisfying the formula as a parameter. This witness system is then used as the behaviour that is mimicked by the instrumentation via suppression, insertion or replacement mechanisms. Although the authors do not explore automated correctness criteria such as the ones we study in this work, it would be interesting to explore the applicability of our methods to their setting.
Bocchi et al. [12] adopt multi-party session types to project the global protocol specifications of distributed networks to local types defining a local protocol for every process in the network, which are then either verified statically via typechecking or enforced dynamically via suppression monitors. To implement this enforcement strategy, the authors define a dynamic monitoring semantics for the local types that suppresses process interactions so as to conform to the assigned local specification. They prove local soundness and transparency for monitored processes which, in turn, imply global soundness and transparency by construction. Their local enforcement is closely related to the suppression enforcement studied in our work, with the following key differences: (i) well-formed branches in a session type are, by construction, explicitly disjoint via the use of distinct choice labels (i.e., similar to our normalised subset sHML_nf), whereas we can synthesise enforcers for every sHML formula using a normalisation procedure; (ii) they give an LTS semantics to their local specifications (which are session types), which allows them to state that a process satisfies a specification when its behaviour is bisimilar to the operational semantics of the local specification – we do not change the semantics of our formulas, which is left in its original denotational form; (iii) they do not provide transparency guarantees for processes that violate a specification, along the lines of Definition 23; (iv) our monitor descriptions, expressed in a dedicated language, sit at a lower level of abstraction than theirs, which have a session-type syntax with an LTS semantics (e.g., repeated suppressions have to be encoded in our case using the recursion construct, while this is handled by their high-level instrumentation semantics).
In [14], Castellani et al. adopt session types to define reading and writing privileges amongst processes in a network as global types for information flow purposes. These global types are projected into local monitors capable of preventing read and write violations by adapting certain aspects of the network. Although their work is pitched towards adaptation [24, 13], rather than enforcement, in certain instances they adapt the network by suppressing messages or by replacing messages with messages carrying a default nonce value. It would be worthwhile investigating whether our monitor correctness criteria could be adapted or extended to this information-flow setting.
**Future Work**
We plan to extend this work along two different avenues. On the one hand, we will attempt to extend the enforceable fragment of μHML. For a start, we intend to investigate maximality results for suppression monitors, along the lines of [25, 2]. We also plan to consider more expressive enforcement mechanisms such as insertion and replacement actions. Finally, we will also investigate more elaborate instrumentation setups, such as the ones explored in [1], that can reveal refusals in addition to the actions performed by the system.
On the other hand, we also plan to study the implementability and feasibility of our framework. We will consider target languages for our monitor descriptions that are closer to an actual implementation (e.g., an actor-based language along the lines of [26]). We could then employ refinement analysis techniques and use our existing monitor descriptions as the abstract specifications that are refined by the concrete monitor descriptions. The more concrete synthesis can then be used for the construction of tools that are more amenable towards showing correctness guarantees.
**References**
6. Cyrille Artho, Howard Barringer, Allen Goldberg, Klaus Havelund, Sarfraz Khurshid, Michael R. Lowry, Corina S. Pasareanu, Grigore Rosu, Koushik Sen, Willem Visser, and
34:16 On Runtime Enforcement via Suppressions
“Synthesizing Input Grammars”: A Replication Study
Bachir Bendrissou
CISPA Helmholtz Center For
Information Security
Germany
bachir.bendrissou@cispa.de
Rahul Gopinath
CISPA Helmholtz Center For
Information Security
Germany
rahul.gopinath@cispa.de
Andreas Zeller
CISPA Helmholtz Center For
Information Security
Germany
zeller@cispa.de
Abstract
When producing test inputs for a program, test generators ("fuzzers") can greatly profit from grammars that formally describe the language of expected inputs. In recent years, researchers thus have studied means to recover input grammars from programs and their executions. The GLADE algorithm by Bastani et al., published at PLDI 2017, was the first black-box approach to claim context-free approximation of input specification for non-trivial languages such as XML, Lisp, URLs, and more.
Prompted by recent observations that the GLADE algorithm may show lower performance than reported in the original paper, we have reimplemented the GLADE algorithm from scratch. Our evaluation confirms that the effectiveness score (F1) reported in the GLADE paper is overly optimistic, and in some cases, based on the wrong language. Furthermore, GLADE fares poorly in several real-world languages evaluated, producing grammars that spend megabytes to enumerate inputs.
CCS Concepts: • Security and privacy → Software reverse engineering; • Software and its engineering → Software maintenance tools; Parsers.
Keywords: context-free grammar, inference, GLADE
1 Introduction
Generating test inputs for a program (“fuzzing”) is much more effective if the fuzzer knows the input language of the program under test—that is, the set of valid inputs that actually leads to deeper functionality in the program. Input languages are typically characterized by context-free grammars, and the recent interest in fuzzing thus has fueled research in recovering input grammars from existing programs.
The GLADE algorithm by Bastani et al., published in “Synthesizing Input Grammars” at PLDI 2017 [6], automatically approximates an input grammar from a given program. In contrast to other approaches, GLADE does not make use of program code to infer input properties. Instead, it relies on feedback from the program whether a given input is valid or not, and synthesizes a multitude of trial inputs to infer the input grammar. GLADE claims substantial improvement over existing algorithms both in terms of accuracy as well as in terms of speed of inference. In particular, GLADE claims better performance over even the current best regular language inference techniques such as L-Star [4] and RPNI [20]. Further, GLADE claims to be able to recover the input grammar for complex languages such as Ruby, Python, and JavaScript in a couple of hours [6, Figure 6].
In recent work [18], however, Kulkarni, Lemieux, and Sen found that the F1 scores—a measure for the accuracy of the inferred grammar—produced by the GLADE tool were much lower than the scores reported in the GLADE paper, for instance XML (0.42 compared to 0.98) and Lisp (0.38 compared to 0.97) [18, Table I].
This observation prompted us to investigate the GLADE algorithm in detail. Given that the algorithm reported in the paper is the central contribution, we reimplemented the GLADE algorithm completely from scratch using the algorithm description given in the paper [6].
We call the implementation by Bastani et al. GLADE-I, and we call our implementation GLADE-II to differentiate both where there is ambiguity. We used our implementation to evaluate synthesized grammars for programs given in the original paper [6]. These include URL, Grep, Lisp, and XML. We further evaluated GLADE-II on several other small grammars such as different parenthesis grammars, Ints, Decimals, and a few real-world complex grammars such as Lua, MySQL, Pascal, XPath, C, TinyC, Tiny, and Basic. Our evaluation uncovers a number of problems and limitations, which we summarize as follows.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
PLDI ’22, June 13–17, 2022, San Diego, CA, USA
© 2022 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 978-1-4503-9265-5/22/06...$15.00
https://doi.org/10.1145/3519939.3523716
1 Available on GitHub [2].
1. The F1 score that we obtained from GLADE-II is much lower than the F1 scores reported by Bastani et al. This confirms the observation by Kulkarni et al. [18].
2. Bastani et al. use handwritten grammars for computing precision and recall. We found that the handwritten *grep* grammar was far more permissive than the program, resulting in spurious results.
3. The precision scores of simple real-world grammars such as Decimals (0.84) and JSON (0.53) are lower than expected, considering the high values reported by GLADE for other programs and considering their simplicity.
4. The recall of JSON (0.79) is lower than expected considering the simplicity of its specification.
5. Similar to *grep*, the XML grammar used by GLADE-I was more permissive than the actual XML specification.
6. GLADE is unable to learn and synthesize valid XML even when a correct XML grammar is used to learn from.
7. GLADE cannot learn trivial context-free languages such as $a^n b^n$ or the language of palindromes.
8. The synthesized grammars are extremely large, often megabytes in size, essentially enumerating inputs.
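For reference, the F1 scores discussed throughout are the harmonic mean of precision (the fraction of grammar-generated inputs the program accepts) and recall (the fraction of valid inputs the grammar can produce or parse). A quick sketch, with illustrative numbers rather than results from either paper:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For example, a precision of 0.53 combined with a recall of 0.79 gives an F1 of about 0.63, showing how two individually middling scores combine into the low aggregates reported above.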
Our implementation GLADE-II and all experiments are available online for inspection and replication.
## 2 Background
Formal specifications for input formats have a long tradition in computer science. Beyond specifying input languages and parsing inputs, grammars have been used for program input generation [15], reverse engineering [7], program refactoring [14], program comprehension [9, 21], and many more. The potential of grammars for producing syntactically valid inputs during test generation and fuzzing has raised interest in methods that recover input grammars from programs and/or given inputs.
An early result in grammar inference was the discovery by Gold [10] that learning an accurate input specification from exclusively positive examples was impossible, even when the specification complexity was limited to regular languages; even when negative examples are given, it is NP-hard [11].
Consequently, current practical approaches to inferring context-free grammars all make use of program executions. “Whitebox” approaches analyze code and dynamic control and data flow to extract compact input grammars that follow the structure of input processing [12, 16, 19]. “Blackbox” approaches, in contrast, extract grammars from membership queries, executing the program only to determine if an input is valid or not. Clark’s algorithm [8] uses a
minimally adequate teacher (which can be simulated by membership queries [4]) to learn a subclass of context-free grammars. More recent “blackbox” approaches include GLADE by Bastani et al. [6] and Arvada by Kulkarni et al. [18], both set to learn general context-free grammars.
However, all “blackbox” approaches for learning general context-free grammars are fundamentally limited: In 1995, Angluin and Kharitonov showed that unless RSA encryption is broken, there is no polynomial-time prediction algorithm using membership queries for context-free grammars [5]. To be efficient, “blackbox” approaches thus need algorithms that are tailored towards the features of commonly used input languages.
## 3 The GLADE Algorithm
GLADE [6] is a grammar inference algorithm. It infers the context-free grammar of a black-box oracle capable of saying yes or no to membership queries. It also bootstraps itself with a set of positive examples. The GLADE paper implies that it can produce the context-free grammar even if the positive examples given do not cover all “interesting behaviors”.
The GLADE algorithm starts with a seed input $a_{in}$. Such a single seed input (or a set of seed inputs) is a finite choice grammar [17] with high precision (because it will never generate an invalid input) but very low recall. From this, GLADE performs a series of precision-preserving generalization steps that attempts to increase the recall. Each step produces more and more general regular expressions.
While the algorithm attempts to preserve precision, doing so during transformations is hard. This is because we only have access to a membership oracle, and it is impossible to guarantee that precision is preserved without an infinite number of queries in general. Hence, GLADE uses a series of heuristic checks to ensure that the candidate is potentially precision preserving.
The GLADE algorithm has two main phases.
### 3.1 Phase I: Regular Expression Synthesis
In the first phase, the idea is to synthesize a representative regular expression. The algorithm first attempts to generalize substrings of the seed as repetition (rep) or alternation (alt).
The seed input $a_{in}$ is first annotated as $[a_{in}]_{rep}$. Then the following rules are followed to successively generalize the internal substrings.
- Given any partly annotated string $P[a]_{rep}Q$ such that $P$ is the non-annotated prefix, $Q$ is the non-annotated suffix, and the string $a$ in between is annotated with *rep*, we first find all decompositions of $a$ of the form $a_1 a_2 a_3$ where $a_2 \neq \epsilon$. We then generate the annotated string $Pa_1([a_2]_{alt})^*[a_3]_{rep}Q$ for every such decomposition of $a$. These, along with the string $PaQ$, become candidates for generalization.
- Next, given any annotated string $P[a]_{alt}Q$, for every decomposition of $a$ of the form $a_1 a_2$ where neither string is empty, we generate $P([a_1]_{rep} + [a_2]_{alt})Q$. These, along with $PaQ$, become generalization candidates. For repetitions, shorter $a_1$ decompositions are preferred for generalization, followed by longer $a_2$; for alternations, shorter $a_1$ decompositions are preferred. The unmodified construction $a$ is tried last.
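The candidate enumeration behind these two rules can be sketched directly. The helpers below (names ours, not GLADE's data structures) list the decompositions the rules consider:

```python
def rep_decompositions(alpha: str):
    """All splits alpha = a1 a2 a3 with a2 nonempty, as considered by the
    repetition rule (a sketch of the enumeration only)."""
    return [(alpha[:i], alpha[i:j], alpha[j:])
            for i in range(len(alpha))
            for j in range(i + 1, len(alpha) + 1)]

def alt_decompositions(alpha: str):
    """All splits alpha = a1 a2 with both parts nonempty (alternation rule)."""
    return [(alpha[:i], alpha[i:]) for i in range(1, len(alpha))]
```

The quadratic number of repetition candidates per substring helps explain the large number of oracle checks reported in Table 1.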
### 3.2 Phase II: Infer Recursive Properties
The idea here is to infer recursive properties and transform the expression into a context-free grammar. The regular expression that was synthesized in Phase I is first translated into a context-free grammar. Next, each pair of nonterminals that were synthesized during Phase I corresponding to repetition is equated and checked whether the resulting language represents a valid generalization.
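One way to approximate such a unification check with only a membership oracle is cross-substitution: strings derived under one nonterminal must remain valid in the contexts of the other. The following is our own sketch of this idea, not GLADE's exact procedure:

```python
def merge_check(strings_a, contexts_b, oracle):
    """Accept the merge only if every sampled string from nonterminal A stays
    valid when substituted into every sampled context of nonterminal B."""
    return all(oracle(pre + s + post)
               for s in strings_a
               for (pre, post) in contexts_b)

def balanced(s: str) -> bool:
    # Toy oracle: balanced parentheses.
    depth = 0
    for c in s:
        if c not in "()":
            return False
        depth += 1 if c == "(" else -1
        if depth < 0:
            return False
    return depth == 0

# Merging looks plausible here: strings {"()", "(())"} stay valid inside
# the context ("(", ")").
ok = merge_check(["()", "(())"], [("(", ")")], balanced)
```

Since only finitely many strings and contexts are sampled, a passing check does not guarantee precision; this heuristic gap is exactly what surfaces in Section 5.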
## 4 Evaluation
For evaluation, we wanted to ensure that the procedure we followed was the same as Bastani et al., except for the new implementation, and that the same grammars were used for computing precision and recall. Hence, the first set of subjects are the original four programs used by Bastani et al. for evaluation: URL, Lisp, Grep, and XML. Out of these, we used the URL and Grep handwritten grammars as given by Bastani et al.\(^2\). We tried to check their accuracy, but could only evaluate that of the handwritten Grep grammar, as this was the only binary available\(^3\). The XML and Lisp grammars were written as Java programs\(^4\). Since these are well-known standard formats, we used external grammars for these.
Next, we wanted to extend our evaluation to a few simpler grammars so that we can understand the characteristics of the algorithm in detail. Hence, the second set includes a few simple grammars: Ints, Decimals, Floats, and JSON.
We then investigated GLADE behavior on a few parenthesis variants: Palindrome, Paren, Bool Add, TwoParen, TwoParenD, TwoAnyParenD, BinParen, and BinAnyParen.
Finally, we wanted to find the performance of GLADE on real-world complex grammars. Hence, the third set contained ANTLR grammars obtained from the ANTLR repository [3]: Lua, MySQL, Pascal, XPath, C, TinyC, Tiny, and Basic.
For the first and second sets of grammars, as well as the parenthesis variants, we produced 50 random inputs using the F1 fuzzer [13], with the random exploration depth set to 100. For the ANTLR grammars, the GLADE algorithm took an extremely long time to learn. Hence, we limited both the seed set and the individual input size: for these, we only generated 10 seed inputs with a maximum random exploration depth of 20.
Note that GLADE claims not to require seed inputs that exercise all interesting behaviors. For ANTLR grammars, we used Grammarinator [15] as the input generator.
We use the same definitions of precision, recall, and the F1 score. We generate 1000 inputs from the synthesized grammar and check how many of them were recognized by the handwritten grammar for precision (P), and we generate 1000 inputs from the handwritten grammar and check how many were recognized by the synthesized grammar for recall (R). The F1 score is calculated as \( F1 = \frac{2PR}{P+R} \).
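These scores can be computed mechanically. A small sketch of the sampling procedure (function names are ours), with a toy producer and acceptor in place of the real grammars:

```python
import random

def accept_rate(generate, accepts, n=1000):
    """Fraction of n inputs drawn from `generate` that `accepts` admits.
    Precision: generate from the synthesized grammar, accept with the
    handwritten one; recall swaps the two roles."""
    return sum(1 for _ in range(n) if accepts(generate())) / n

def f1_score(p: float, r: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)

# Toy illustration: the "synthesized" producer emits three-digit strings,
# while the "handwritten" acceptor admits only strings without a leading zero.
rng = random.Random(0)  # fixed seed for reproducibility
producer = lambda: "".join(rng.choice("0123456789") for _ in range(3))
acceptor = lambda s: not s.startswith("0")
precision = accept_rate(producer, acceptor, n=200)
```

With real grammars, `generate` would be a grammar-based fuzzer and `accepts` a parser for the other grammar.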
During the check for accuracy, we produced inputs from the Grep handwritten grammar and checked how many of these were accepted by the Grep binary. We found that only 33% of inputs were accepted. Given that the Grep grammar is thus far from the actual input grammar of the Grep binary, we write **Grep** from now on to indicate that it is not the true grammar. Similarly, GLADE-I uses a relaxed definition of XML, allowing any number of root elements in contrast to the XML specification, as claimed in the GLADE paper [6]. Since this is not the true XML grammar either, we write **XML** for it. Unlike **Grep**, however, we also check whether GLADE can learn the XML grammar with the single-root-node constraint. We mark this grammar as XML.
We first used GLADE-II to synthesize grammars corresponding to each of these grammars. The details of the GLADE-II runs are given in Table 1. LTime measures the total learning time for a grammar in seconds. Seeds Len refers to the average length of the random seeds used in learning a grammar. \( \sigma \) refers to the standard deviation of the seed lengths. Checks lists the number of checks the algorithm performed in the learning process.
Table 2 contains the precision, recall, and corresponding F1 score obtained by GLADE-II on each subject. It also shows the size of the synthesized grammars in kilobytes. A few cells are empty because the score is unavailable: the large size of these synthesized grammars made parsing infeasible in a reasonable amount of time and memory.
Table 3 describes the statistics of handwritten grammars\(^5\). These were used as the black-box program (grammar+parser) whose input grammar Glade-II was expected to learn.
Table 4 describes the synthesized grammar statistics. The synthesized grammars are much larger and more complex than the actual input grammars of the black-box programs. The GLADE paper [6] does not report grammar sizes.
Why did we not evaluate our subjects with GLADE-I? As we hinted in the introduction, the GLADE-I source is exceedingly entangled, and it is hard to add new programs to its evaluation.
---
\(^2\) https://github.com/obastani/glade-full/blob/master/data/handwritten/
\(^3\) https://github.com/obastani/glade-full/blob/master/data/handwritten/SyntheticGrammars.java
\(^4\) https://github.com/obastani/glade-full/blob/master/src/glade/constants/SyntheticGrammars.java
\(^5\) Only ASCII symbols were considered.
### Table 1. Glade-II Execution
<table>
<thead>
<tr>
<th>Grammar</th>
<th>LTime (s)</th>
<th>Seeds Len</th>
<th>$\sigma$</th>
<th>Checks</th>
</tr>
</thead>
<tbody>
<tr>
<td>URL</td>
<td>193</td>
<td>13.64</td>
<td>3.66</td>
<td>91,885</td>
</tr>
<tr>
<td>Lisp</td>
<td>1,463</td>
<td>13.93</td>
<td>13.20</td>
<td>81,769</td>
</tr>
<tr>
<td><strong>XML</strong></td>
<td>5,840</td>
<td>16.44</td>
<td>13.15</td>
<td>129,362</td>
</tr>
<tr>
<td>XML</td>
<td>618</td>
<td>15.22</td>
<td>9.64</td>
<td>73,272</td>
</tr>
<tr>
<td><strong>Grep</strong></td>
<td>1,891</td>
<td>20.24</td>
<td>14.88</td>
<td>99,843</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Grammar</th>
<th>LTime (s)</th>
<th>Seeds Len</th>
<th>$\sigma$</th>
<th>Checks</th>
</tr>
</thead>
<tbody>
<tr>
<td>Ints</td>
<td>1</td>
<td>2.08</td>
<td>1.18</td>
<td>3,216</td>
</tr>
<tr>
<td>Decimals</td>
<td>15</td>
<td>3.72</td>
<td>2.20</td>
<td>21,292</td>
</tr>
<tr>
<td>Floats</td>
<td>16</td>
<td>5</td>
<td>2.45</td>
<td>22,827</td>
</tr>
<tr>
<td>JSON</td>
<td>7,398</td>
<td>24.04</td>
<td>45.78</td>
<td>172,163</td>
</tr>
</tbody>
</table>
### Table 2. Glade-II Scores
<table>
<thead>
<tr>
<th>Grammar</th>
<th>Precision</th>
<th>Recall</th>
<th>F1</th>
<th>Size (KB)</th>
</tr>
</thead>
<tbody>
<tr>
<td>URL</td>
<td>0.687</td>
<td>0.81</td>
<td>0.796</td>
<td>296</td>
</tr>
<tr>
<td>Lisp</td>
<td>0.378</td>
<td>0.55</td>
<td>0.638</td>
<td></td>
</tr>
<tr>
<td><strong>XML</strong></td>
<td>0.55</td>
<td>0.96</td>
<td>0.976</td>
<td></td>
</tr>
<tr>
<td>XML</td>
<td>0.579</td>
<td>0.759</td>
<td>0.66</td>
<td>635</td>
</tr>
<tr>
<td><strong>Grep</strong></td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>2,000</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Grammar</th>
<th>Precision</th>
<th>Recall</th>
<th>F1</th>
<th>Size (KB)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Ints</td>
<td>0.983</td>
<td>0.99</td>
<td>1</td>
<td>14</td>
</tr>
<tr>
<td>Decimals</td>
<td>0.848</td>
<td>0.92</td>
<td>1</td>
<td>74</td>
</tr>
<tr>
<td>Floats</td>
<td>0.914</td>
<td>0.984</td>
<td>0.95</td>
<td>71</td>
</tr>
<tr>
<td>JSON</td>
<td>0.531</td>
<td>0.797</td>
<td>0.64</td>
<td>594</td>
</tr>
</tbody>
</table>
### Table 3. Source Grammars
<table>
<thead>
<tr>
<th>Grammar</th>
<th>Non-terminals</th>
<th>Rules</th>
<th>Terminals</th>
</tr>
</thead>
<tbody>
<tr>
<td>URL</td>
<td>13</td>
<td>119</td>
<td>73</td>
</tr>
<tr>
<td>Lisp</td>
<td>12</td>
<td>78</td>
<td>63</td>
</tr>
<tr>
<td><strong>XML</strong></td>
<td>13</td>
<td>142</td>
<td>65</td>
</tr>
<tr>
<td>XML</td>
<td>12</td>
<td>140</td>
<td>65</td>
</tr>
<tr>
<td><strong>Grep</strong></td>
<td>12</td>
<td>155</td>
<td>91</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Grammar</th>
<th>Non-terminals</th>
<th>Rules</th>
<th>Terminals</th>
</tr>
</thead>
<tbody>
<tr>
<td>Ints</td>
<td>5</td>
<td>16</td>
<td>10</td>
</tr>
<tr>
<td>Decimals</td>
<td>7</td>
<td>19</td>
<td>11</td>
</tr>
<tr>
<td>Floats</td>
<td>10</td>
<td>27</td>
<td>14</td>
</tr>
<tr>
<td>JSON</td>
<td>27</td>
<td>159</td>
<td>101</td>
</tr>
</tbody>
</table>
### Table 4. Synthesized GLADE-II Grammars
<table>
<thead>
<tr>
<th>Grammar</th>
<th>Non-terminals</th>
<th>Rules</th>
<th>Terminals</th>
</tr>
</thead>
<tbody>
<tr>
<td>URL</td>
<td>436</td>
<td>7,604</td>
<td>78</td>
</tr>
<tr>
<td>Lisp</td>
<td>923</td>
<td>15,928</td>
<td>63</td>
</tr>
<tr>
<td><strong>XML</strong></td>
<td>1,086</td>
<td>24,938</td>
<td>65</td>
</tr>
<tr>
<td>XML</td>
<td>693</td>
<td>16,282</td>
<td>69</td>
</tr>
<tr>
<td><strong>Grep</strong></td>
<td>993</td>
<td>54,756</td>
<td>91</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Grammar</th>
<th>Non-terminals</th>
<th>Rules</th>
<th>Terminals</th>
</tr>
</thead>
<tbody>
<tr>
<td>Ints</td>
<td>49</td>
<td>297</td>
<td>10</td>
</tr>
<tr>
<td>Decimals</td>
<td>194</td>
<td>1,252</td>
<td>19</td>
</tr>
<tr>
<td>Floats</td>
<td>260</td>
<td>1,524</td>
<td>14</td>
</tr>
<tr>
<td>JSON</td>
<td>1,418</td>
<td>10,727</td>
<td>114</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Grammar</th>
<th>Non-terminals</th>
<th>Rules</th>
<th>Terminals</th>
</tr>
</thead>
<tbody>
<tr>
<td>Palindrome</td>
<td>46</td>
<td>89</td>
<td>44</td>
</tr>
<tr>
<td>Paren</td>
<td>1,635</td>
<td>3,549</td>
<td>3</td>
</tr>
<tr>
<td>Bool Add</td>
<td>1,097</td>
<td>2,224</td>
<td>9</td>
</tr>
<tr>
<td>TwoParenD</td>
<td>842</td>
<td>1,821</td>
<td>5</td>
</tr>
<tr>
<td>TwoAnyParenD</td>
<td>830</td>
<td>1,554</td>
<td>106</td>
</tr>
<tr>
<td>BinParen</td>
<td>1,188</td>
<td>2,625</td>
<td>12</td>
</tr>
<tr>
<td>BinAnyParen</td>
<td>3,514</td>
<td>7,260</td>
<td>120</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Grammar</th>
<th>Non-terminals</th>
<th>Rules</th>
<th>Terminals</th>
</tr>
</thead>
<tbody>
<tr>
<td>Tiny</td>
<td>571</td>
<td>6,652</td>
<td>75</td>
</tr>
<tr>
<td>Lua</td>
<td>1,723</td>
<td>33,647</td>
<td>139</td>
</tr>
<tr>
<td>Pascal</td>
<td>2,975</td>
<td>41,648</td>
<td>134</td>
</tr>
<tr>
<td>MySQL</td>
<td>1,478</td>
<td>120,213</td>
<td>137</td>
</tr>
<tr>
<td>XPath</td>
<td>1,180</td>
<td>46,760</td>
<td>136</td>
</tr>
<tr>
<td>C</td>
<td>1,153</td>
<td>102,499</td>
<td>127</td>
</tr>
<tr>
<td>TinyC</td>
<td>2,294</td>
<td>30,126</td>
<td>162</td>
</tr>
<tr>
<td>Basic</td>
<td>1,298</td>
<td>13,683</td>
<td>135</td>
</tr>
</tbody>
</table>
The experiments were done on a machine with 8 logical CPUs (Intel(R) Core(TM) i7-6700K @ 4.00GHz) and 16 GB of memory. The operating system was Ubuntu.
## 5 Discussion
There are several limitations with the GLADE paper.
### 5.1 Dependence on Seeds
In its discussion of relevant research, the GLADE paper [6] claims that some of the other grammar inference techniques rely on positive examples that exercise all interesting behaviors. One can wonder whether this implies that the seeds required by GLADE need not cover all interesting behaviors. In our evaluation, the performance of GLADE is strongly dependent on the features covered by seed inputs. A smaller number of seeds results in lower recall (less variety) but higher precision (less chance of making mistakes).
### 5.2 Reporting of Results
The F1 scores in the GLADE paper are reported only graphically [6, Figure 4] and never explicitly specified. The most important information, the precision and recall of the synthesized grammars, is never reported separately from F1.
### 5.3 Evaluation Results
The F1 scores we obtain, listed in Table 2, are much lower than expected from the GLADE paper. The comparison of scores for the original four programs (URL, Lisp, XML, and Grep) is reported in Table 5. As we mentioned previously, the evaluation of Grep by the GLADE paper is unreliable. The problem is that it uses a handwritten grammar for computing precision, and this handwritten grammar for Grep accepts a much larger language than the actual Grep program. Hence, the reported F1 score is highly inflated.
The precision scores for Decimals (0.85), JSON (0.53), and Tiny (0.21) are much lower than expected. These grammars were not part of the original GLADE paper but were added to investigate the capabilities of GLADE. While investigating the reason, we found that the merge strategy of GLADE fails to preserve precision in certain instances. We also found that the number of checks used by GLADE is often insufficient to correctly identify repetition generalizations. However, increasing the number of such checks would adversely impact the speed of grammar learning.
### 5.4 Practicality of the Inferred Grammars
One of the strong claims of GLADE is that the recovered grammar can be immediately used for fuzzing. However, we found that the synthesized grammars are extremely large. For example, learning Grep resulted in a 2 MB grammar. The problem with such large grammars is that they are essentially enumerative. They cannot feasibly be used for parsing existing seed files; the parsers we tried gave up on grammars of this size. Even grammar-based generators tend to have trouble with such large grammars. This is especially noticeable when comparing against the original grammars. For example, Palindrome resulted in a 13 KB grammar, while the actual grammar contains a single nonterminal and five rules.
The biggest surprise comes from the parenthesis languages. These are trivial languages with fewer than five nonterminal symbols, so it should be easy for GLADE to recover their grammars. However, GLADE fares poorly on most of them, both in terms of accuracy (F1) and in the size of the grammars recovered: thousands of nonterminal symbols and hundreds of kilobytes in size. On inspection, the grammars GLADE recovered were strongly enumerating rather than abstracting.
### 5.5 Insights about the GLADE Algorithm
During our implementation of the GLADE algorithm and the subsequent evaluation, we gained a number of insights about the algorithm and why it has problems with some languages. We describe these in detail below.
1. The GLADE algorithm cannot learn a valid XML representation. Even in the paper [6], the regular expression synthesized – (a*(a*/<a/>)* – does not always produce valid XML inputs as it lacks a root element.⁶
2. The heuristic checks specified by Bastani et al. are insufficient even for XML, which drove their design [6, Section 8]. For example, given a seed `<ab/>`, the first generalization is `<a*b/>` and the second generalization is `<a*b*/>`. The second generalization is imprecise because we can now construct `</>`. However, it is accepted because the two required checks (`<a/>` and `<abb/>`) pass [6, Section 4.3 Check Construction].
3. Merging can incur loss of precision. Consider, for example, TwoAnyParenD. We start with a seed input [()]*, which is generalized to [()]*. During Phase II, ()* and 1* are hence checked for unification. To verify the new generalization, GLADE constructs two checks [6, Section 5.3 Check Construction] – [11]
### Table 5. GLADE Learning Accuracy (F1 Score)
<table>
<thead>
<tr>
<th>Grammar</th>
<th>Language</th>
<th>GLADE-I F1</th>
<th>GLADE-II F1</th>
</tr>
</thead>
<tbody>
<tr>
<td>URL</td>
<td>Regular</td>
<td>0.92</td>
<td>0.81</td>
</tr>
<tr>
<td>Lisp</td>
<td>Context-Free</td>
<td>0.97</td>
<td>0.55</td>
</tr>
<tr>
<td>XML</td>
<td>Context-Free</td>
<td>0.98</td>
<td>0.7</td>
</tr>
<tr>
<td>XML</td>
<td>Context-Free</td>
<td>–</td>
<td>0.66</td>
</tr>
<tr>
<td>Grep</td>
<td>Regular</td>
<td>0.93</td>
<td>1.00</td>
</tr>
</tbody>
</table>
⁶As the actual F1 score was not reported by Bastani et al. [6], we estimated it from the graph [6, Figure 4(b)].
⁷The handwritten grammar used for computing precision is much more permissive than the actual Grep grammar. Hence, the high F1 score.
⁸Bastani et al. [6, Section 7 Limitations] incorrectly claims that it is valid XML subset.
and [ ( ) ( ) ]. Since both are valid, the new generalization is accepted. However, the resulting grammar can now produce [ ( ) ] which is invalid.
4. In Phase II, only repetitions are considered for unification. These are, however, insufficient in many cases. Consider XML and the seed input `<b><hi></b>`. GLADE never learns about `<b><b><hi></b></b>`, because the nesting `<b><hi></b>` is not a repetition. We found the same issue in Palindrome, where the synthesized grammar is exclusively made up of concrete enumerations.
5. The character generalization [6, Section 5.3 Check Construction] can produce generalizations that do not preserve precision. Consider the Grep grammar. We use `a` as a seed input. GLADE now constructs the check `[ ]`, which passes, producing `[ ]a` as a generalization for the first index. Next, GLADE constructs the check `aa`, which passes, resulting in `[ ]a` as a generalization for the second index and the new generalized language `[a][1][1]a`. However, this language loses precision because it can produce `[a`, which is invalid.
## 6 Threats to Validity
We acknowledge that our evaluation of the GLADE algorithm is subject to the following threats.
**Defects in the implementation.** One of the largest threats to our evaluation is the possibility that (1) we misunderstood some parts of the GLADE paper and/or (2) we implemented the algorithm incorrectly. Given that this is a software program, this is a possibility that cannot be completely mitigated. We have tried to reduce the possible bugs as much as possible by carefully documenting our code, reviewing our code multiple times, investigating how simple grammars that exercised each feature of the GLADE algorithm behaved, and investigating a sample of inputs that reduced the precision or recall of the synthesized grammar.
**GLADE algorithm vs. tool.** For investigating GLADE, we also considered the GLADE tool supplied for replication. However, the GLADE tool does not support extraction or inspection of the inferred grammars, and we found it prohibitively hard to extract the GLADE algorithm code, as it is deeply entangled with the code for custom serialization and custom input generation provided with GLADE. However, we note that our implementation achieves better F1 scores than Kulkarni et al. [18] (who used the GLADE tool) for JSON (0.64 > 0.59), XML (0.66 > 0.42), and Lisp (0.55 > 0.38). While the evaluation of Kulkarni et al. achieved a higher score for TinyC (0.47 < 0.60), we note that the recall, which is an indication of the amount of actual abstraction, is surprisingly low (0.17).
**Defects in the input generator.** Another threat is that (1) our input generator is faulty (generates invalid strings) or (2) it is biased (generates a skewed distribution of inputs). We have tried to mitigate this by using off-the-shelf fuzzers such as the F1 fuzzer [13] and Grammarinator [15]. Further, we have checked that the strings the fuzzers generate are parsed by the same grammar.
**Defects in the parser.** Another threat is the possibility that our parser may be defective, rejecting valid inputs or accepting invalid inputs. We have mitigated this by using an off-the-shelf, well-tested textbook parser [22].
## 7 Questions and Answers
Given that this is a replication study, questions may arise about our procedure. Let us address the most important ones.
1) **Why do we not use GLADE-I [2]?**
Our focus is to replicate the GLADE paper, not its implementation, as we see the paper as the definitive, archived, and cited reference. ACM defines [1] replicability as: “For computational experiments, this means that an independent group can obtain the same result using artifacts which they develop completely independently.” Hence, we independently implemented the same algorithm. We note that our effort was prompted by the discovery that some of the GLADE F1 scores failed the reproducibility test for XML and Lisp when attempted by other researchers [18] using the same artifacts from GLADE-I (Section 6).
2) **Is this a complete replication of the GLADE paper?**
We attempt to replicate only what we consider to be the main result of the paper, which is the high accuracy achieved in F1 scores while learning different grammars. In particular, we do not replicate the evaluation of L-Star or RPNI. Secondly, we do not replicate the fuzzing experiments.
3) **How do we know that the language $a^n b^n$ could not be learned by GLADE?**
Note that $a^n b^n$ can equivalently be defined as:
$$\langle S \rangle ::= '(' \langle S \rangle ')' \mid \epsilon$$
We evaluated Palindrome (Figure 1), which is a trivial extension of the language $a^n b^n$. Palindrome is defined as:
$$\langle S \rangle ::= '(' \langle S \rangle ')' \mid '[' \langle S \rangle ']' \mid '\{' \langle S \rangle '\}' \mid \epsilon$$
That is, Palindrome contains three pairs of parentheses rather than just one pair. We analyzed different variations of the same language with different pairs, and found that the particular pattern, nesting nonterminals without repetition, is not learnable by GLADE.
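For concreteness, here is a membership oracle for Palindrome, under our reading of the three pairs as `()`, `[]`, and `{}` (a sketch):

```python
def in_palindrome(s: str) -> bool:
    """Recognizer for S ::= '(' S ')' | '[' S ']' | '{' S '}' | epsilon.
    The language nests without any repetition: exactly the pattern that a
    repetition-only Phase II cannot abstract."""
    pairs = {"(": ")", "[": "]", "{": "}"}
    if s == "":
        return True
    return (len(s) >= 2
            and s[0] in pairs
            and s[-1] == pairs[s[0]]
            and in_palindrome(s[1:-1]))
```

Note that `([{}])` is in the language, while a concatenation such as `()[]` is not; learning the language thus requires generalizing nesting, not repetition.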
4) **How representative of real-world grammars are JSON and Decimals?**
We argue that both grammars are representative and simple: **Representativeness.** Decimal numbers are a common component in almost all programming languages. JSON is
one of the most popular data-interchange formats, similar to XML and Lisp S-Expressions. Hence, we believe that both the Decimals and JSON grammars are representative of the real world.
**Simplicity.** The Decimals grammar is a regular grammar containing only 19 rules and 7 nonterminals (Table 3). JSON is an LL(1) grammar that is a heavily reduced subset of the actual JSON specification. It contains only 27 nonterminals and 159 rules. We believe that regular grammars containing 19 rules should be considered simple by any definition, and LL(1) grammars are one of the simplest grammar classes under the Context-Free Grammar umbrella.
5) **Does your implementation of GLADE include the optimizations from original GLADE implementation?**
The only optimization mentioned in the GLADE paper [6] is multiple-inputs optimization, which enables GLADE to learn from multiple seed inputs. We have implemented that.
6) **Is there a potential for non-determinism in the GLADE learning?**
As far as we are aware, there is no potential for non-determinism in the GLADE implementation. We contacted the GLADE authors regarding our implementation. The only advice was to be careful about the order in which the alternatives were tried. We followed their advice and have implemented it exactly as the paper mentions. If there are any avenues of non-determinism influencing the grammar learning by GLADE, the GLADE paper does not mention it.
7) **Why do you not evaluate Learning Highly Recursive Input Grammars [18] as well?**
Our focus is on replication of GLADE. We do not claim that our research is much more than that. Hence, evaluation of Learning Highly Recursive Input Grammars is out of scope for this study.
8) **How dependent is GLADE on the seed selection?**
The GLADE paper uses ambiguous language in this regard. It says that it can produce the context-free grammar even if the positive examples given do not cover all "interesting behaviors". However, the paper does not provide a definition of behavior. Hence, we would not know how to validate (or invalidate) this claim.
9) **What is going wrong with GLADE and how can it be addressed?**
This paper is a pure replication study of the GLADE paper. Hence, a detailed analysis of what is going wrong with GLADE, and how to overcome it is out of scope for this study.
## 8 Conclusion
Recovering input grammars for existing programs is an important, yet challenging problem. The GLADE algorithm by Bastani et al. is the first published approach that is set to recover general context-free grammars using membership queries alone. Having reimplemented the GLADE algorithm, we find that the accuracy of the inferred context-free grammars is much lower than originally reported, a discrepancy recently also reported for the original GLADE tool [18]. Our investigation details more issues with the GLADE algorithm; notably, we show that its inferred grammars can be extremely large and enumerative, indicating low usability for practical tasks such as parsing or producing inputs with general fuzzers. Prospective users should also evaluate other grammar mining approaches, such as the “blackbox” and “whitebox” approaches listed in Section 2.
Should the GLADE issues have been caught by the PLDI 2017 reviewers? In total, replicating and evaluating GLADE took us more than six person-months; we cannot expect reviewers to spend all this time checking a paper. We hope, however, that future authors search for and report weaknesses just as they do for strengths, and that future reviewers appreciate honesty just as they appreciate success.
Replication studies are still rare in our field. Indeed, it is much more work to replicate a piece of research, especially from a paper, than to implement a new alternative from scratch (for which one may also get more credit). That extra effort comes from the required quality assurance: Does the reimplementation really and exactly reflect the algorithm(s) as stated in the paper? Of course, such quality assurance would be expected from any piece of research; yet, it is the authors of the replication study who would be challenged with such questions, not so much the authors of the original paper. As a community, we need to further encourage replication and reuse of research results, by making tools and data available, usable, understandable, and extensible. Such standards must become the norm, not the exception.
Our annotated reimplementation GLADE-II and all experimental data is available at: https://doi.org/10.5281/zenodo.6326396
## A Appendix
\[
\langle S \rangle ::= '(' \langle S \rangle ')' \mid '[' \langle S \rangle ']' \mid '\{' \langle S \rangle '\}' \mid \epsilon
\]
*Figure 1*. Palindrome
\[
\langle S \rangle ::= \langle PS \rangle \\
\langle PS \rangle ::= \langle P \rangle \langle PS \rangle \mid \langle P \rangle \\
\langle P \rangle ::= '(' \langle PS \rangle ')' \mid '(' ')'
\]
*Figure 2*. Paren
\[
\langle S \rangle ::= \langle S \rangle '+' \langle S \rangle \mid '(' \langle S \rangle ')' \mid \langle D \rangle \\
\langle D \rangle ::= '1' \mid '0'
\]
Figure 3. Bool Add
\[
\langle S \rangle ::= '(' \langle S \rangle ')' \langle S \rangle \mid \epsilon
\]
Figure 4. TwoParen
\[
\langle S \rangle ::= '(' \langle S \rangle ')' \langle S \rangle \mid \langle D \rangle \\
\langle D \rangle ::= '1' \mid '1' \langle D \rangle
\]
Figure 5. TwoParenD
\[
\langle S \rangle ::= \langle D \rangle \mid '(' \langle S \rangle ')' \langle S \rangle \mid '[' \langle S \rangle ']' \mid '\{' \langle S \rangle '\}' \\
\langle D \rangle ::= \epsilon \mid '1' \mid '1' \langle D \rangle
\]
Figure 6. TwoAnyParenD
\[
\langle START \rangle ::= \langle DECNUM \rangle \\
\langle DECNUM \rangle ::= \langle INT \rangle \ '.' \ \langle DEC \rangle \\
\langle DEC \rangle ::= \langle DigitZs \rangle \langle DigitNZ \rangle \mid '0' \\
\langle INT \rangle ::= \langle DigitNZ \rangle \langle DigitZs \rangle \mid '0' \\
\langle DigitZs \rangle ::= \epsilon \mid \langle DigitZ \rangle \langle DigitZs \rangle \\
\langle DigitZ \rangle ::= '0' \mid \langle DigitNZ \rangle \\
\langle DigitNZ \rangle ::= [1-9]
\]
Figure 10. Decimal grammar
\[
\langle START \rangle ::= \langle FLOAT \rangle \\
\langle FLOAT \rangle ::= \langle INT \rangle \ '.' \ \langle EXT \rangle \mid \ '.' \ \langle EXT \rangle \mid \langle INT \rangle \ '.' \\
\langle EXT \rangle ::= \langle DEC \rangle \mid \langle DEC \rangle \langle LETTER \rangle \langle OP \rangle \langle INT \rangle \mid \langle DEC \rangle \langle LETTER \rangle \langle INT \rangle \\
\langle DEC \rangle ::= \langle DigitZs \rangle \langle DigitNZ \rangle \mid '0' \\
\langle INT \rangle ::= \langle DigitNZ \rangle \langle DigitZs \rangle \mid '0' \\
\langle OP \rangle ::= '-' \\
\langle LETTER \rangle ::= 'e' \mid 'E' \\
\langle DigitZs \rangle ::= \epsilon \mid \langle DigitZ \rangle \langle DigitZs \rangle \\
\langle DigitNZ \rangle ::= [1-9] \\
\langle DigitZ \rangle ::= '0' \mid \langle DigitNZ \rangle
\]
Figure 11. Float grammar
"google_gemma-3-12b-it_is_public_document": [[0, 2874, true], [2874, 5999, null], [5999, 9290, null], [9290, 12331, null], [12331, 15767, null], [15767, 17714, null], [17714, 20584, null], [20584, 20662, null], [20662, 24479, null], [24479, 26499, null], [26499, 27774, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 27774, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 27774, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 27774, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 27774, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 27774, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 27774, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 27774, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 27774, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 27774, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 27774, null]], "pdf_page_numbers": [[0, 2874, 1], [2874, 5999, 2], [5999, 9290, 3], [9290, 12331, 4], [12331, 15767, 5], [15767, 17714, 6], [17714, 20584, 7], [20584, 20662, 8], [20662, 24479, 9], [24479, 26499, 10], [26499, 27774, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 27774, 0.21154]]}
|
olmocr_science_pdfs
|
2024-12-11
|
2024-12-11
|
851238b153a287d6574497195f8e7c62d593e48b
|
A Comparative Study of Security Techniques for Protecting Mobile Agents from Malicious Hosts
Anupam Jain1, Kunwar Singh Vaisla2 and Zia Saquib3
1M.Tech CSE, Department of Computer Science and Engineering, BTKIT Dvarahat, Almora, Uttarakhand, India
2Associate Professor, Department of Computer Science and Engineering, BTKIT Dvarahat, Almora, Uttarakhand, India
3Executive Director, Centre for Development of Advanced Computing, Mumbai, India
e-mail: 1anupamjain2198@gmail.com, 2vaislaks@rediffmail.com, 3zsaquib@gmail.com
Abstract—In the modern world, mobile agent technology offers a new and innovative paradigm in which an agent has the capability to migrate from one host to another and resume its execution there. A mobile agent is the incarnation of highly organized software with embedded intelligence. Mobile agents are gaining researchers’ attention due to their valuable features. Despite a number of successful mobile agent applications, there are still some barriers preventing this technology from spreading to a wider range of enterprise and individual users. Security issues play an important role in the development of secure and tamper-resistant mobile agent systems. This paper provides an overview of a range of measures for countering the identified threats and fulfilling these security objectives.
Keywords: Mobile agent, Black box, Host, Mobility
I. INTRODUCTION
The mobile agent paradigm has many advantages over traditional network computing models: it reduces network traffic and overcomes network latency. As the sophistication of mobile agents has increased over time, so have the security threats they face. Security aspects of a mobile agent system can be classified into the following four main categories: agent to host, agent to agent, host to agent, and others to agent/host.
The issue of securing hosts from malicious agents has been widely investigated and researched, and many mechanisms have been developed to protect the host from a hostile agent. Securing the mobile agent against hostile hosts, however, is one of the most crucial subjects in mobile agent technology. Sander and Tschudin present two types of security problems that must be solved [1]. The first is host protection against tampering agents. The second is agent protection against hostile hosts. Many mechanisms have been developed for the first kind of problem, such as access control, password protection, and sandboxes, but the second problem remains a challenging issue for researchers. A mobile agent can be described as follows:
A mobile agent is a software program that migrates from node to node of a heterogeneous network on the user’s behalf to perform a task [1][2]. It consists of three parts: code, a data state, and an execution state. Code mobility is the main property of mobile agents, giving them the capability to dynamically change the bindings between code fragments and the location where they are executed [3]. There are two levels of mobility: weak mobility and strong mobility. With weak mobility, the agent carries its data state and code, and on moving, execution has to start from the beginning. With strong mobility, the agent carries its data state, code, and execution state, so execution can continue from the point where it stopped on the previous host. Mobile agents are goal oriented.
The main security issue is safely executing the code of the mobile agent in a trusted environment [2]. Once the mobile code and data reach a host, they are fully at the host’s mercy. The host has the capability to manipulate the code and data: it can erase the previous activities or the information collected by the mobile agent for its own benefit.
Hostile hosts can carry out the following attacks:
1. Manipulation of code.
2. Manipulation of data.
3. Manipulation of interaction with other agents or hosts.
4. Eavesdropping on the code and data.
5. Eavesdropping on the execution of code.
6. Erasing the information previously collected by the agent.
7. Returning incorrect data.
One solution to these security problems is to restrict agents to trusted nodes or hosts. However, limiting an agent to trusted nodes limits the usefulness of the mobile agent paradigm: in today’s electronic world an agent may visit many hosts, and it is unclear how an agent determines whether a particular node is trusted, or how a trusted node is added to the system. Because mobile agents execute on remote hosts, some experts think that there is no way to ensure the safety of an agent without using tamper-resistant hardware.
There are a number of possible software solutions to protect mobile agents from malicious hosts. This paper explores possible mobile agent applications and some of the security problems associated with them; after this, we discuss threats in mobile agent systems and the various mechanisms that have been proposed to protect them.
II. NEED OF SECURITY IN MOBILE AGENT APPLICATIONS
Let us discuss a classical example (Fig. 1) that uses mobile agent technology to determine the best price for an airline ticket. In this example, an agent is sent out to find the best possible price for an airline ticket to a specific destination, visiting each host that offers airline tickets. The agent returns to the user when all of the possible ticket prices have been evaluated, carrying a list of the best prices back to its owner [24].
Several types of attacks are possible on the agent searching for airline tickets. If we do not provide any security to the agent, the possible attacks include the following.
1. A malicious host can change the data carried by the agent; for example, it could delete airline ticket prices that are cheaper than its own offer, in an attempt to win the agent’s business.
2. If the mobile agent carries some form of e-cash, a malicious host can steal it.
3. A malicious host could modify the flow control of the agent so that the agent bypasses other hosts with cheaper airline tickets.
4. If the agent’s code is in plain text, a malicious host can learn the algorithms used in the agent’s implementation and make changes to the agent’s code so that it never completes its task.
These types of attacks show the importance of security mechanisms in the mobile agent paradigm: without proper security, we cannot use mobile agent technology in e-commerce applications.
III. POSSIBLE THREATS TO MOBILE AGENT
In a mobile agent system, a malicious host is a host that executes a mobile agent and tries to mount some kind of attack. When an agent executes on a host, it uses the host’s resources, and the host is able to monitor the memory used by the agent and every instruction the agent issues.
Therefore, a malicious host may attack an agent in a number of ways [25]:
- a. A host masquerading as another host.
- b. Denial of service by the host to the agent.
- c. Eavesdropping on an agent’s activity.
- d. Alteration of the agent by the host.
A. Masquerading
In masquerading, a host tries to make the agent believe that it is another host, causing the agent to give the host sensitive information. Once the masquerading host gains the agent’s trust, it may then be able to read or modify any of the agent’s code. The way to prevent this kind of attack is to use a strong authentication protocol to distinguish legitimate hosts from malicious ones. The following figure describes how masquerading works: a host named Joe steals the login id and password of another host, Sarah, and by using these credentials Joe pretends to be Sarah. Joe can now gain the trust of any arriving mobile agent and access all of its sensitive information.
B. Denial of Services
In this kind of threat, a host may deny a mobile agent a specific service. This type of attack is possible in two ways:
- 1. A host may deny an agent its service, intentionally or unintentionally, so that the agent is not able to complete its task.
- 2. The host could terminate the agent altogether.
A host can also deny requests generated by an agent; in the case of a time-sensitive task, the agent is then unable to complete its task in the allotted time slot because its request is denied by the host.
In Fig. 2, the attacker is a malicious host that creates a flood of zombie requests to a victim host (which is not malicious); under this load the victim host starts denying requests, so when a mobile agent submits a request, the victim host unintentionally denies it. This kind of threat is called denial of service.
C. Eavesdropping
The next attack that can be performed by a malicious host is eavesdropping. This kind of attack is also well known from the typical client/server model.
D. Alteration
The last attack that can be performed by a malicious host is alteration of the code, data, and control flow of an agent. A malicious host may alter the code of the agent so that it starts a different task rather than the task assigned by its creator. A host may also try to change the data carried by the agent.
IV. SECURITY MECHANISMS TO PROTECT MOBILE AGENTS
A number of mechanisms have been developed to protect mobile agents. These mechanisms fall into four types of protection:
1. Mobile agents migrate only to the trusted nodes/hosts of the system.
2. Organizational methods may be employed to protect agents (i.e., create a closed system where only trusted parties can be hosts).
3. Tamper-resistant hardware can be used to ensure the integrity of an agent.
4. Cryptographic protocols can be used to ensure the security of the mobile agent.
Below, we describe generic security technologies and research efforts to counter the threats that arise from the mobility property of mobile agents.
There are two approaches to protecting a mobile agent [26]:
1. Detection mechanisms: detect any unauthorized modification of an agent.
2. Prevention mechanisms: use security techniques to prevent unauthorized access to code and data.
Table 1 shows some of the countermeasures that have been created to protect mobile agents from malicious hosts and states whether each mechanism is aimed at detection or prevention.
The techniques in the detection category employ methods such as replication of mobile agents, digital signatures to detect tampering, and various cryptographic schemes; the goal of each of these mechanisms is to determine when an agent has been attacked. The preventive techniques, in contrast, aim to hide the data, code, and flow control from the hosts where the agent executes, preventing an attack before it can occur.
<table>
<thead>
<tr>
<th>Countermeasures</th>
<th>Category</th>
</tr>
</thead>
<tbody>
<tr>
<td>Partial Result Encapsulation</td>
<td>Detection</td>
</tr>
<tr>
<td>Mutual Itinerary Recording</td>
<td>Detection</td>
</tr>
<tr>
<td>Execution Tracing</td>
<td>Detection</td>
</tr>
<tr>
<td>Time Limited Black Box</td>
<td>Prevention</td>
</tr>
<tr>
<td>Computing with Encrypted Functions</td>
<td>Prevention</td>
</tr>
<tr>
<td>Environmental Key Generation</td>
<td>Prevention</td>
</tr>
</tbody>
</table>
E. Partial Result Encapsulation
Partial result encapsulation is a detection technique that detects any tampering with the results produced by the agent at the different platforms or hosts it visits. In this technique, the result computed by the agent at each host is encapsulated. The encapsulated results support verification either when the agent returns to its home platform or possibly at intermediate points.
Partial result encapsulation uses different cryptographic primitives such as encryption, message authentication codes, and hash functions. The sliding encryption technique is used to provide confidentiality for the results: the mobile agent uses the public key of its originator to encrypt the result at each platform, generating a ciphertext; later, when the agent returns home, the originator decrypts the ciphertext with the corresponding private key. This technique is valuable where space is at a premium, such as on smartcards, and where the key size is larger than the gathered data to be encrypted by the mobile agent [4][5].
Yee proposed the Partial Result Authentication Code (PRAC) technique [6], which encapsulates each partial result with a Message Authentication Code. The mobile agent and its originator generate a list of secret keys before the agent starts its itinerary. The agent uses the secret key associated with a particular host to encapsulate the results gathered there. Once the encapsulation at the current host is complete, the agent destroys the used key before migrating to the next host. Erasing each secret key before the agent moves on ensures that the previous results remain secure. This provides the forward integrity property, which states that no host visited in the future can modify previous results. Since the originator retains the full list of secret keys, it can verify the partial results, improving integrity in the mobile agent system [6].
There are some limitations to this approach. The most critical problem occurs when a malicious host exposes the secret keys or key-generating functions: it can then tamper with the results without any possibility of detection. Second, the approach does not secure the code or other aspects of the mobile agent. PRAC is oriented toward integrity rather than confidentiality; the partial results can be viewed by any platform visited, although this is easily resolved by applying sliding encryption or other forms of encryption.
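The key-chaining idea behind PRAC can be sketched with standard HMACs. This is an illustrative sketch, not Yee's exact construction: deriving each next key by hashing the previous one, and all names below, are assumptions.

```python
import hashlib
import hmac

def prac_chain(origin_secret: bytes, results: list) -> list:
    """Compute a MAC over the partial result gathered at each host.

    The agent derives a fresh key per host from the origin secret and
    discards the previous key, so a later (possibly malicious) host
    cannot forge MACs for earlier results -- the forward integrity
    property described above.
    """
    macs = []
    key = origin_secret
    for result in results:
        macs.append(hmac.new(key, result, hashlib.sha256).digest())
        key = hashlib.sha256(key).digest()  # next host's key; old one erased
    return macs
```

The originator, which still holds `origin_secret`, can recompute the whole chain to verify the gathered results; tampering with any result changes its MAC, while earlier MACs stay valid.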
F. Mutual Itinerary Recording
In this technique, the path history (itinerary) of the mobile agent is maintained to detect tampering attacks by a malicious host. The itinerary is tracked by two cooperating agents [7]. Each agent conveys information about its current, last, and next platforms to its cooperating peer through a secure channel. The peer keeps a record of the path and takes appropriate action when an unfavorable condition occurs. The main logic behind this scheme rests on the assumption that only a few agent platforms are hostile, and even if an agent comes across one, that platform is unlikely to collaborate with another malicious platform being visited by the peer. Thus, the malicious behavior of an agent platform can be detected by dividing the operations of the application between two agents. There are some drawbacks to this technique. First, maintaining the authenticated communication channel is a costly operation. Second, the technique cannot decide which of the two hosts may be responsible for killing an agent.
G. Execution Tracing
Operating systems, scriptable applications, mobile codes all software should be capable to secure them self from malicious code. Execution tracing is a detection technique; this technique is used by all kind of software now going to use this technique to make mobile agents more secure from malicious host. Many researchers have been present many techniques which are used in execution tracing one of them is security automata. Discuss here security automata technique which is widely accepted by many researchers. But many security policies like Chinese wall policy [9, 10, 11], one out of k-authorization policy, and low water mark policy restrict the execution tracing and allow only to trace shallow history of previously grant access events, cause of this we shall also discuss shallow history automata [8] here.
1) Shallow History Tracing
This section introduces execution tracing policies and security automata, and then shallow history automata, an extension of security automata motivated by the restricted policies above. The class of SHA-enforceable policies will be proved to be a proper subset of the class of execution tracing security policies, confirming the claim that subclasses of execution tracing security policies can be described through restrictions on the accessible information. To fix ideas, the notion of execution-tracing-enforceable policies and its characterization via security automata are discussed first; shallow history automata are then discussed.
2) Execution Tracing Security Policies
Let \( \Sigma \) be a countable (finite or infinite) set of access events. A policy \( P \) is a set \( P \subseteq \Sigma^* \) of finite sequences of access events. An execution-tracing-enforceable policy is a non-empty, prefix-closed policy; a policy \( P \) is prefix closed if it satisfies the condition below:
\[ \forall u \in \Sigma^* : u \notin P \Rightarrow ( \forall v \in \Sigma^* : uv \notin P) \]
Let \( \text{Prefix}(\omega) \) be the set of all prefixes of \( \omega \), including \( \omega \) itself, that is:
\[ \text{Prefix}(\omega) = \{ u \in \Sigma^* \mid \exists v \in \Sigma^* : uv = \omega \} \]
It is now easy to see the following equivalent characterization of prefix-closed policies:
\[ \forall \omega \in \Sigma^* : \omega \in P \Rightarrow \text{Prefix}(\omega) \subseteq P \quad (1) \]
3) Security Automata
A security automaton, a variant of Büchi automata, is defined in [8]. A security automaton is a quadruple \( (\Sigma, Q, q_0, \delta) \) where:
- \( \Sigma \) is a finite set of access events.
- \( Q \) is a finite set of states.
- \( q_0 \in Q \) is an initial state.
- \( \delta \) is a (partial) transition function \( \delta : Q \times \Sigma \rightarrow Q \).
The notion of acceptance for security automata differs from that of regular finite automata. A regular finite automaton has explicitly defined final states, but a security automaton (SA) does not. An SA accepts an access event sequence if a transition is defined for every event in the sequence. Formally, consider a given security automaton
\[ M = (\Sigma, Q, q_0, \delta) \]
and define the following notation for \( q, q' \in Q \), \( a \in \Sigma \), \( \omega \in \Sigma^* \):
\[ q \xrightarrow{a}_M q' \quad \text{iff} \quad \delta(q, a) = q' \]
\[ q \xrightarrow{\epsilon}_M q, \qquad q \xrightarrow{\omega a}_M q' \quad \text{iff} \quad \exists q'' \in Q : q \xrightarrow{\omega}_M q'' \wedge q'' \xrightarrow{a}_M q' \]
We say that the security automaton \( M \) accepts an access event sequence \( \omega \) if \( q_0 \xrightarrow{\omega}_M q \) for some \( q \in Q \). The policy recognized by \( M \) is defined as the set of all sequences accepted by \( M \):
\[ P(M) = \{ \omega \in \Sigma^* \mid \exists q \in Q : q_0 \xrightarrow{\omega}_M q \} \]
It is easy to see that such a set is always prefix-closed and non-empty: the policy recognized by a security automaton \( M \), denoted \( P(M) \), contains \( \epsilon \) and satisfies Equation (1). Conversely, for any given non-empty, prefix-closed policy \( P \) there is a security automaton \( M \) such that \( P = P(M) \): consider the security automaton \( (\Sigma, \Sigma^*, \epsilon, \delta_P) \), where \( \delta_P(\omega, a) \) is defined to be \( \omega a \) whenever \( \omega, \omega a \in P \); such a security automaton recognizes \( P \). Consequently, the class of execution-tracing-enforceable policies coincides with the class of policies recognized by security automata. We call the above automaton the canonical security automaton for policy \( P \), denoted \( SA(P) \). Intuitively, the state of a security automaton represents the information that the execution monitor tracks during execution tracing: it is the internal data structure maintained by the monitor across subsequent access-granting decisions. The image of the transition function captures how this internal data structure is updated, while the domain of the transition function captures the logic of the access-granting decisions. Notice that the canonical security automaton tracks the full history of previously granted access events.
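The acceptance condition above (a transition must be defined for every event) can be simulated directly. The sketch below represents a security automaton by a dictionary for its partial transition function; the canonical automaton for a small prefix-closed policy is built exactly as in the construction above. The names are illustrative.

```python
def sa_accepts(delta: dict, q0, word: str) -> bool:
    """A security automaton accepts a sequence iff a transition is
    defined for every event -- there are no explicit final states."""
    q = q0
    for a in word:
        if (q, a) not in delta:
            return False
        q = delta[(q, a)]
    return True

# Canonical SA for the prefix-closed policy P = Prefix("abcd"):
# states are the members of P themselves, and delta(w, a) = wa iff wa is in P.
P = {"", "a", "ab", "abc", "abcd"}
delta = {(w, a): w + a for w in P for a in "abcd" if w + a in P}
```

For example, `sa_accepts(delta, "", "abc")` holds, while `"abd"` is rejected at the third event because no transition is defined there.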
4) Shallow History Automata
Let \( F(S) \) denote the set of all finite subsets of a set \( S \) [8]. A finite subset of \( \Sigma \), i.e., a member of \( F(\Sigma) \), is called a shallow access history, or simply a shallow history. Our task is to define automata that track only the shallow history of previously granted access events. A shallow history automaton (SHA) is a special kind of security automaton of the form
\[ (\Sigma, F(\Sigma), H_0, \delta) \]
where \( \delta(H, a) = H \cup \{a\} \) whenever \( \delta \) is defined at \( (H, a) \). That is, the transition function \( \delta \) of a shallow history automaton is uniquely specified by listing the points at which it is defined, so specifying an SHA amounts to specifying the domain of \( \delta \) as a subset of \( F(\Sigma) \times \Sigma \). A policy recognized by some shallow history automaton is said to be SHA-enforceable. We now prove that shallow history automata are strictly less expressive than security automata.
**Theorem:** For a fixed set \( \Sigma \) of possible access events, there is a security automaton \( M \) such that no shallow history automaton \( N \) satisfies \( P(M) = P(N) \).
**Proof:** Let \( \Sigma = \{a, b, c, d\} \) and let the policy be \( P = \text{Prefix}(abcd) \cup \text{Prefix}(badc) \). The policy \( P \) is prefix closed and non-empty, so it is recognizable by its canonical security automaton. Suppose, toward a contradiction, that \( P \) is recognized by a shallow history automaton \( N \) with initial state \( H_0 \). Since \( N \) accepts \( abcd \) and \( badc \), the following transitions must be defined [8]:
$$
\begin{align*}
H_0 \xrightarrow{a} \{a\} \cup H_0 \xrightarrow{b} \{a,b\} \cup H_0 \xrightarrow{c} \{a,b,c\} \cup H_0 \xrightarrow{d} \{a,b,c,d\} \cup H_0 \\
H_0 \xrightarrow{b} \{b\} \cup H_0 \xrightarrow{a} \{a,b\} \cup H_0 \xrightarrow{d} \{a,b,d\} \cup H_0 \xrightarrow{c} \{a,b,c,d\} \cup H_0
\end{align*}
$$
However, both runs pass through the same state \( \{a,b\} \cup H_0 \) after their first two events, so with the transitions above \( N \) also accepts \( abdc \) and \( bacd \):
$$
\begin{align*}
H_0 \xrightarrow{a} \{a\} \cup H_0 \xrightarrow{b} \{a,b\} \cup H_0 \xrightarrow{d} \{a,b,d\} \cup H_0 \xrightarrow{c} \{a,b,c,d\} \cup H_0 \\
H_0 \xrightarrow{b} \{b\} \cup H_0 \xrightarrow{a} \{a,b\} \cup H_0 \xrightarrow{c} \{a,b,c\} \cup H_0 \xrightarrow{d} \{a,b,c,d\} \cup H_0
\end{align*}
$$
Since \( abdc \notin P \), this is a contradiction: the policy \( P \) is not SHA-enforceable. It follows that shallow history automata are strictly less expressive than security automata.
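The counterexample can be replayed mechanically. In the sketch below, a shallow history automaton is given only by the domain of its transition function, as in the definition above; the initial history is taken to be empty, and the names are illustrative. Building the domain from the runs for abcd and badc shows that abdc is also accepted, even though it lies outside P.

```python
def sha_accepts(domain: set, word: str) -> bool:
    """A shallow history automaton tracks only the set of previously
    granted events; every defined transition maps H to H | {a}."""
    H = frozenset()  # take the initial shallow history H0 to be empty
    for a in word:
        if (H, a) not in domain:
            return False
        H = H | {a}
    return True

f = frozenset
domain = {
    (f(), "a"), (f("a"), "b"), (f("ab"), "c"), (f("abc"), "d"),  # run for abcd
    (f(), "b"), (f("b"), "a"), (f("ab"), "d"), (f("abd"), "c"),  # run for badc
}
```

After either `ab` or `ba` the shallow history is the same set `{a, b}`, so the two runs cannot be kept apart: `sha_accepts(domain, "abdc")` holds even though abdc is not in P.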
**H. Time Limited Black Box**
Code obfuscation is a viable approach to securing mobile agent code. In this technique, an obfuscator transforms the code into a form that is harder to understand while preserving its functionality. The aim is to produce tamper-resistant mobile code that is hard for malicious hosts to understand or analyze. There are several useful obfuscating transformations: layout obfuscation, data obfuscation, control obfuscation, and preventive obfuscation. Layout obfuscation changes the code’s appearance, e.g., scrambling identifiers, changing formatting, or removing and adding comments. Data obfuscation changes the storage, encoding, aggregation, and ordering of data, e.g., modifying inheritance relations or splitting data. Control obfuscation reorders statements, loops, or expressions and changes computations, e.g., extending loop conditions. Preventive obfuscation exploits weaknesses in current decompilers or deobfuscators and inherent problems with deobfuscation techniques [13][14].
Hohl [12] proposed an obfuscation technique called the time-limited black box. The goal of a time-limited black box is to scramble all of the information and code contained in an agent: the only information that can be obtained from the agent is its input and its output, as reflected in Fig. 3. The code and data contained in the agent are obfuscated so that it will take an attacker a long time to understand the agent’s code. The aim of using obfuscated code is that a host will execute the code while having no idea what the code is actually doing. This technique protects the agent and its data within a time interval.
**Fig.3 Time Limited Black Box Property**
An agent is a time-limited black box if, for a certain fixed, known time interval:
1. The data and code of the agent specification cannot be read.
2. The data and code of the agent specification cannot be tampered with.
3. Attacks after the protection interval are possible, but these attacks have no effect.
There are some limitations to black box security. A host may still deny execution or return a false result to the mobile agent, and the technique is complex and costly in terms of execution and transmission speed.
**I. Environmental Key Generation**
Mobile agents are prepared to execute on various hosts with different environmental security conditions. The aim of this technique is to improve the security of mobile agents and allow their execution under varying environmental security conditions. Environmental key generation is a preventive security technique for mobile agents. It rests on an adaptive trust mechanism based on the dynamic interaction between the mobile agent and its environment. The environmental key is derived from information that agents collect dynamically from the various hosts (and vice versa). This key informs the host about the trust degree and allows the mobile agent to proceed with its execution. Trust estimation is based on the values of various parameters.
As noted, protecting mobile agents from malicious hosts is a challenging research area. Several approaches have already been discussed, such as tamper-proof hardware, function hiding, and black boxes, but these approaches have limitations and some are costly. We therefore describe an approach that is neither overly limited nor costly and offers an acceptable level of security: environmental key generation [15].
**Principles of this approach** The environmental key generation approach is based on a protocol (the mobile agent code protection protocol) [16] and a control technique to improve trust and increase security. We first discuss the mobile agent code protection protocol and then the properties of a secure environment. We make the following assumptions:
1. The customer and the service provider first establish a contract.
2. The service provider knows some confidential information concerning the customer (e.g., the contract reference).
3. The host always knows something about the environment and about itself that is not known to the agent.
4. The information known by the customer has an incidence on the agent owner’s decision about the requested service (i.e., whether or not to execute it).
5. The mobile agent always has to compute the environmental key and cannot assume that the private information of the host is correct.
**Mobile agent code protection protocol** The aim of this protocol is to secure the code of a mobile agent against malicious hosts. The environmental key is central to the task, since it encodes the trust degree of the target host. When the customer requires a service, it sends a request to all the service providers, who reply with acknowledgements carrying their proposals. The customer analyzes all proposals, selects the best one, and informs the respective service provider, which generates public and private keys and assigns a particular key and a sufficient abstract expression to every behavior of the mobile agent [15]. The mobile agent then moves to the customer host. The main steps of this protocol are shown in Fig. 4.
- The agent starts interacting with the environment in order to obtain the information required to generate the environmental key.
- The customer encrypts the environmental key with the public key of the service provider and sends it to the provider.
- The service provider decrypts the received key, identifies it, selects the corresponding abstract expression, encrypts this expression with the environmental key, and sends it to the customer.
- The customer tries to decipher the abstract expression with the environmental key. If it succeeds, it executes the requested service.
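The challenge-response at the heart of these steps can be sketched as follows. The public-key transport of the environmental key is elided here, and the toy XOR keystream cipher and all names are illustrative assumptions standing in for a real authenticated cipher.

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR data with a SHA-256-derived keystream.
    Encryption and decryption are the same operation."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(x ^ s for x, s in zip(data, stream))

def provider_respond(expressions: dict, env_key: bytes):
    """The provider identifies the received environmental key and, if it
    matches an assigned key K_j, returns the corresponding abstract
    expression E_j encrypted under that key; otherwise it refuses."""
    expr = expressions.get(env_key)
    return None if expr is None else keystream_xor(env_key, expr)
```

A customer that derived the correct key from its environment decrypts the expression with the same `keystream_xor` call and executes the service; a host that cannot reproduce the key obtains nothing.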
**A secure environment** Determining whether an environment is secure requires observation, and this observation is also used to enhance security and establish trust. If the mobile agent cannot establish trust, it uses observation to protect itself from the host or at least to detect its misbehavior. A host that knows an agent is observing it tends to behave more reliably [18]. According to Jøsang et al., trust is the extent to which one party is willing to depend on somebody, or something, in a given situation with a feeling of relative security, even though negative consequences are possible [17].
The trust placed in a host depends on several parameters and on its possible malicious behaviors. To define trust we need to answer several questions:
- How can the agent perceive its reception environment in order to emit the right opinion?
- How can the various observations be aggregated to generate the environment key?
- How can this environment key exactly define the category of customers?
- How can this key inform about the origin of a failure in case of misbehavior?
**Key generation** We now describe the cryptographic methods used to generate the environment key. Let $E = \{E_1, E_2, E_3, E_4, \ldots, E_n\}$ be a set of $n$ abstract expressions used to implement the different behaviors of the mobile agent, and let $A = \{A_1, A_2, A_3, A_4, \ldots, A_p\}$ be the set of $p$ adaptable modules (including dummies) that are included in different implementations [5]. Each $E_i$ ($i \leq n$) is a sequence of calls to subsets of $A$ and can also be viewed as a sequence of bits (each bit indicating a specific module). With $p$ modules there are $2^p - 2$ possible combinations (excluding the empty expression), and each combination is associated with an abstract expression. An environment key $K_j$ is generated for each expression $E_j$. The key definition uses information collected at the host together with the unique identifier of the mobile agent, and is based on a hash function and public-key cryptography. Let us consider the following pairs of public and secret keys:
Host keys $(P_h, S_h)$.
Agent owner keys ($P_0, S_0$).
As soon as the mobile agent reaches the customer host, it executes some actions used in trust acquisition, explained in Algorithm 1 below [15].
### J. Algorithm 1 The Mobile Agent Behavior
1. Gather the data corresponding to the parameter values; let $\{d_1, d_2, d_3, d_4, \ldots, d_k\}$ be the set of collected data.
2. Apply a secure one-way hash function (SHS) to each datum: for $i = 1$ to $k$ do $M_i = H(d_i)$.
3. Concatenate all digests to obtain $M = \{M_1, M_2, M_3, \ldots, M_k\}$.
4. Encrypt $M$ with the agent owner's public key: $P_0(M)$.
5. Send the signed message $SM = S_0(P_0(M))$ to the service provider.
6. Apply hashing to the result of step 3; let $D = H(M)$ be the final digest.
7. Compute $D \oplus id$ (where $id$ is the unique identifier of the mobile agent) to generate the environment key $k_j$, which will be used to decrypt the abstract expression $E_j$.
8. Receive an abstract expression.
9. Decrypt the received abstract expression with $k_j$.
10. If decryption succeeds, execute the selected services.
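The key-derivation steps of Algorithm 1 (steps 1–3 and 6–7) can be sketched in Python. SHA-256 stands in for the unspecified hash function, and the agent identifier below is a made-up value:

```python
import hashlib

AGENT_ID = b"agent-0042"  # hypothetical unique identifier of the mobile agent

def derive_environment_key(collected_data, agent_id=AGENT_ID):
    digests = [hashlib.sha256(d).digest() for d in collected_data]  # M_i = H(d_i)
    m = b"".join(digests)                                           # M = M_1 || ... || M_k
    final_digest = hashlib.sha256(m).digest()                       # D = H(M)
    padded_id = agent_id.ljust(len(final_digest), b"\x00")          # pad id to |D|
    return bytes(a ^ b for a, b in zip(final_digest, padded_id))    # k_j = D XOR id

# Both sides derive the same key from the same observations,
# so the environment key itself never has to be transmitted.
customer_key = derive_environment_key([b"os=linux", b"domain=example.org"])
provider_key = derive_environment_key([b"os=linux", b"domain=example.org"])
assert customer_key == provider_key
```

The XOR with the agent identifier binds the key to one particular agent, so two agents observing the same environment still derive different keys.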
The service provider also executes some actions in order to obtain the customer's trust degree, as explained in Algorithm 2.
### K. Algorithm 2 The Service Provider Action
1. Receive the signed message $SM = S_0(P_0(M))$.
2. Calculate $P_0(M) = P_0(S_0(P_0(M)))$.
3. Calculate $M = S_0(P_0(M))$.
4. Obtain the $k$ digests $H(d_1), H(d_2), H(d_3), \ldots, H(d_k)$.
5. Check the obtained digests against the digests present in the database.
6. Estimate trustworthiness by calculating the value of $T$ [8].
7. Compare the trust values with the interval values and select the actions to be undertaken.
8. Select an abstract expression according to the selected service; let $E_j$ be the selected abstract expression.
9. Apply hashing to the result of step 3; let $D = H(H(d_1), H(d_2), \ldots, H(d_k))$ be the final digest.
10. Compute $D \oplus id$ (where $id$ is the unique identifier of the mobile agent) to generate the key $k_j$.
11. Encrypt the selected abstract expression $E_j$ with $k_j$.
12. Sign the encrypted abstract expression and send it to the customer.
The environment key is calculated on both the customer side and the service provider side in order to protect it and avoid transmitting it.
### L. Computing with Encrypted Function
This is a preventive technique to secure a mobile agent from a malicious host. Its main goal is to determine a method by which the code of a mobile agent can safely compute cryptographic primitives. This approach is based on three basic techniques:
1. Homomorphic encryption scheme (HES).
2. Three address code.
3. Function composition (FnC).
#### F.1 Three address code
Many of today's high-level languages, such as C and Java, rely on a compiler to convert source code into target code [19]. Compilers work in several phases, such as lexical analysis, syntax analysis, and semantic analysis. Most compilers generate intermediate code before generating the target code, and three address code is one form of intermediate representation. For example, if the source code contains the expression $a + b * c$, it may be translated into the following sequence:
$p1 = b * c$;
$p2 = a + p1$;
where $p1$ and $p2$ are compiler-generated temporary variables. A three-address instruction contains at most three addresses: two for the operands and one for the result.
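A toy translator illustrating this scheme; the recursive structure and variable names below are our own, not part of any particular compiler:

```python
counter = 0  # compiler-style temporary counter

def new_temp():
    global counter
    counter += 1
    return f"p{counter}"

def gen(node, code):
    """Emit three-address code for a tree of (op, left, right) tuples;
    leaves are variable names.  Returns the name holding the result."""
    if isinstance(node, str):
        return node
    op, left, right = node
    l = gen(left, code)
    r = gen(right, code)
    t = new_temp()
    code.append(f"{t} = {l} {op} {r}")
    return t

code = []
gen(("+", "a", ("*", "b", "c")), code)   # the expression a + b * c
print("\n".join(code))
# p1 = b * c
# p2 = a + p1
```

Because the multiplication subtree is visited first, the emitted sequence matches the $p1$/$p2$ example above.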
#### F.2 Homomorphic encryption scheme
Researchers identified a limitation of conventional encryption systems: once data is decrypted, it is no longer secure. This led to a new cryptographic technique, called privacy homomorphism, in which an authorized user can compute on encrypted data without decrypting it [20]. Later, Sander and Tschudin described additive-multiplicative homomorphism, a type of privacy homomorphism [21] [22]. An additive-multiplicative homomorphism ensures that the result of a computation on two encrypted values corresponds to the result of the same computation on the unencrypted values. Following Sander and Tschudin's work, we now describe the properties of additive-multiplicative homomorphism. Let $G$ and $H$ be two rings and $E: G \to H$ an encryption function.
- **Additive Homomorphic** If there is an efficient algorithm PLUS to compute $E(x+y)$ from $E(x)$ and $E(y)$ that does not reveal $x$ and $y$.
- **Multiplicative Homomorphic** If there is an efficient algorithm MULT to compute $E(xy)$ from $E(x)$ and $E(y)$ that does not reveal $x$ and $y$.
- **Mixed Multiplicative Homomorphic** If there is an efficient algorithm MIXED-MULT to compute $E(xy)$ from $E(x)$ and $y$ that does not reveal $x$.
The homomorphic schemes described above allow only two types of operations: addition and multiplication. Note also that the relationship between plaintext and ciphertext is one-to-many: a single plaintext message $x$ may yield multiple ciphertexts $E(x)$; that is, two ciphertexts may satisfy $E_1(x) \neq E_2(x)$ while decryption always yields $D(E_1(x)) = D(E_2(x))$. Another important point is that only a few elements (ideally only one) should satisfy the mixed multiplicative property; otherwise the second and last properties together produce the anomaly $y = E(y)$. Thus only one integer, the multiplicative identity $1$, should satisfy the last property $E(xy) = E(x) \cdot y$.
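As a concrete illustration of the multiplicative property, textbook RSA (with deliberately tiny, insecure parameters chosen here for demonstration) satisfies $E(x)\,E(y) \bmod n = E(xy)$:

```python
# Textbook RSA with tiny, insecure parameters (illustration only).
p, q = 61, 53
n = p * q                  # modulus, 3233
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent; gcd(e, phi) = 1

def E(x):
    return pow(x, e, n)    # encryption: x^e mod n

x, y = 7, 11
# MULT: multiplying ciphertexts yields the encryption of the product,
# without revealing x or y to whoever performs the multiplication.
assert (E(x) * E(y)) % n == E(x * y)
```

An additive homomorphism would require a different scheme (e.g. Paillier); plain RSA exhibits only the multiplicative property.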
#### F.3 Function Composition
Sander and Tschudin argue that computing with encrypted functions can be accomplished not only with additive-multiplicative homomorphisms, but also with mathematical analogues such as composite functions [23].
Consider a scenario: Alice wants to evaluate a linear map $A$ on Bob's input $x$ on Bob's computer, but she does not want to reveal $A$ to Bob. Alice therefore picks a random invertible matrix $S$, computes $B := SA$, and sends $B$ to Bob. Bob computes $y = Bx$ and sends $y$ back to Alice. Alice then computes $S^{-1}y$ and obtains the result $Ax$ without having revealed $A$ to Bob.
A composite function $f(x)$, written $f = g \circ h$ or $f(x) = g(h(x))$, is obtained by taking the output of one function, $h(x)$, and applying it as the input to another function, $g(x)$.
In Fig. 5, Alice is the agent owner and has a function $h(x)$ that she wants to evaluate on Bob's computer with Bob's input $x$, without revealing anything about the function. Alice chooses an invertible function $g(x)$, creates the composite function $f(x)$, and sends it to Bob. Bob does all the computation on input $x$ and sends the result back to Alice. Bob cannot determine the owner's function $h(x)$, because all he sees is the composite function $f(x)$. Only Alice can extract the exact result of $h(x)$ from the result of $f(x)$ by applying the inverse of $g(x)$ (that is, $h(x) = g^{-1}(f(x))$).
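The linear-map scenario above can be checked in a few lines of Python. The matrices below are arbitrary examples, and $S$ is chosen with determinant 1 so that its inverse stays integral:

```python
def matmul(M, N):
    """Plain integer matrix product."""
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

A = [[3, 1], [2, 4]]        # Alice's secret linear map
S = [[2, 1], [1, 1]]        # random invertible blind, det(S) = 1
S_inv = [[1, -1], [-1, 2]]  # exact integer inverse of S

B = matmul(S, A)            # Alice sends only B = S*A to Bob
x = [[5], [7]]              # Bob's input, as a column vector
y = matmul(B, x)            # Bob evaluates the blinded map: y = B*x
result = matmul(S_inv, y)   # Alice unblinds: S^-1 * (S*A*x) = A*x

assert result == matmul(A, x)   # Alice gets A*x; Bob never saw A
```

Bob only ever handles $B$ and $y$; recovering $A$ from $B$ alone would require knowing $S$.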
V. COMPARATIVE STUDY OF DETECTION TECHNIQUES
There are only three detection techniques for securing mobile agents: partial result encapsulation, mutual itinerary recording, and execution tracing. Every technique has limitations, and no single technique fulfills all objectives. Table 2 summarizes the study of these three mechanisms.
The three preventive techniques compared are: environmental key generation, time-limited black box, and computing with encrypted functions. Table 3 summarizes the comparative study of these preventive techniques.
<table>
<thead>
<tr>
<th>Parameters</th>
<th>Prevent Unauthorized Access of Information</th>
<th>Prevent Masquerading</th>
<th>Prevent Denial of Service</th>
<th>Prevent Eavesdropping</th>
<th>Prevent Copy and Replay</th>
<th>Detect Tampering</th>
<th>Implementation Available</th>
</tr>
</thead>
<tbody>
<tr>
<td>Mutual Itinerary Recording</td>
<td>NO</td>
<td>NO</td>
<td>NO</td>
<td>NO</td>
<td>YES</td>
<td>YES</td>
<td></td>
</tr>
<tr>
<td>Partial Result Encapsulation</td>
<td>PARTIAL</td>
<td>YES (digital signatures)</td>
<td>YES</td>
<td>YES (digital encryption)</td>
<td>YES (digital signatures)</td>
<td>PARTIAL</td>
<td>YES</td>
</tr>
<tr>
<td>Execution tracing</td>
<td>NO</td>
<td>YES</td>
<td>NO</td>
<td>NO</td>
<td>NO</td>
<td>YES (time limited)</td>
<td>PARTIAL</td>
</tr>
</tbody>
</table>
### Table 2
<table>
<thead>
<tr>
<th>Parameters</th>
<th>Prevent Unauthorized Access of Information</th>
<th>Prevent Masquerading</th>
<th>Prevent Denial of Service</th>
<th>Prevent Eavesdropping</th>
<th>Prevent Copy and Replay</th>
<th>Detect Tampering</th>
<th>Implementation Available</th>
</tr>
</thead>
<tbody>
<tr>
<td>Environmental key generation</td>
<td>PARTIAL</td>
<td>YES</td>
<td>NO</td>
<td>YES</td>
<td>NO</td>
<td>PARTIAL</td>
<td>NO</td>
</tr>
<tr>
<td>Time limited black box</td>
<td>YES (time limited)</td>
<td>NO</td>
<td>NO</td>
<td>NO</td>
<td>NO</td>
<td>YES (time limited)</td>
<td>YES</td>
</tr>
<tr>
<td>Encrypted functions</td>
<td>PARTIAL</td>
<td>NO</td>
<td>NO</td>
<td>YES</td>
<td>NO</td>
<td>PARTIAL</td>
<td>NO</td>
</tr>
</tbody>
</table>
### Table 3
**Partial Result Encapsulation** PRAC is oriented towards integrity rather than confidentiality; the collected set of partial results can be observed by any platform visited.
**Execution Tracing** One drawback of this approach is the size of the logs created by the hosts (each host may execute a large number of mobile agents); another is the difficulty of managing those logs. The detection process is only triggered occasionally, based on suspicious results or other factors, and the size of the logs can become unmanageable.
**Mutual Itinerary Recording** This is a technique for observing the path history of a mobile agent, i.e., how it visits the platforms. It is of limited practical use because it expects the programmer to know in advance all the nodes the agent will visit, and maintaining the authenticated communication channel is a costly operation.
**Time-Limited Black Box Security** This approach tries to turn the agent code into a 'black box' using code obfuscation. Code obfuscation provides code confidentiality: it scrambles the agent's program, making it difficult to understand and manipulate.
Since an attacker needs time to examine the black-box code before attacking it, the agent is protected for a certain interval. After this 'expiration interval', the agent and the data it transports become invalid. If the agent is successfully converted into a black box, the hosts cannot interfere with its execution in any direct way, so a whole range of threats such as eavesdropping and alteration of state can be addressed. However, a host may still refuse execution or return false results to the agent, and the technique is complex and costly in terms of execution and transmission speed.
**Computing with Encrypted Functions** This approach enhances security by generating a new encrypted value from new data and a previously encrypted value, without ever decrypting the encrypted data. The original data can thus be transmitted, in encrypted form, to hosts that perform the required computation, while preserving the privacy not only of the encrypted data, as in privacy homomorphism, but of the keys as well. However, finding suitable encryption schemes that can transform arbitrary functions remains a challenge, and this scheme does not prevent denial of service, replay, or experimental extraction.
**Environmental Key Generation** Environmental key generation can protect code and data from integrity and privacy attacks, but it also has weaknesses. First, it is vulnerable to group conspiracy attacks. Second, protecting the data channel is another security issue. Third, although the approach improves the integrity and privacy of code and data, it does not provide any protection for results. Fourth, once the code and data are decrypted, they can be attacked by a malicious host, which can insert its own decrypting routine and data channel for new hosts. Clueless agents have been proposed as a solution to prevent code and data disclosure [8].
### Table 4
<table>
<thead>
<tr>
<th>Proposed Mechanism</th>
<th>Time limited black box</th>
<th>Computing with Encrypted functions</th>
<th>Environmental Key generation</th>
</tr>
</thead>
<tbody>
<tr>
<td>Method used for hiding code</td>
<td>Obfuscation</td>
<td>Homomorphic Encryption</td>
<td>Keys are generated from one-way hash functions</td>
</tr>
<tr>
<td>Limited by time interval</td>
<td>YES</td>
<td>NO</td>
<td>NO</td>
</tr>
<tr>
<td>Code is executable without decrypting</td>
<td>YES</td>
<td>YES</td>
<td>NO</td>
</tr>
<tr>
<td>Entire program hidden</td>
<td>YES</td>
<td>YES</td>
<td>NO</td>
</tr>
<tr>
<td>Technique is mathematically provable</td>
<td>NO</td>
<td>YES</td>
<td>YES</td>
</tr>
</tbody>
</table>
### REFERENCES
Implementing Remote Procedure Calls
ANDREW D. BIRRELL and BRUCE JAY NELSON
Xerox Palo Alto Research Center
Remote procedure calls (RPC) appear to be a useful paradigm for providing communication across a network between programs written in a high-level language. This paper describes a package providing a remote procedure call facility, the options that face the designer of such a package, and the decisions we made. We describe the overall structure of our RPC mechanism, our facilities for binding RPC clients, the transport level communication protocol, and some performance measurements. We include descriptions of some optimizations used to achieve high performance and to minimize the load on server machines that have many clients.
CR Categories and Subject Descriptors: C.2.2 [Computer-Communication Networks]: Network Protocols—protocol architecture; C.2.4 [Computer-Communication Networks]: Distributed Systems—distributed applications, network operating systems; D.4.4 [Operating Systems]: Communications Management—message sending, network communication; D.4.7 [Operating Systems]: Organization and Design—distributed systems
General Terms: Design, Experimentation, Performance, Security
Additional Keywords and Phrases: Remote procedure calls, transport layer protocols, distributed naming and binding, inter-process communication, performance of communication protocols.
1. INTRODUCTION
1.1 Background
The idea of remote procedure calls (hereinafter called RPC) is quite simple. It is based on the observation that procedure calls are a well-known and well-understood mechanism for transfer of control and data within a program running on a single computer. Therefore, it is proposed that this same mechanism be extended to provide for transfer of control and data across a communication network. When a remote procedure is invoked, the calling environment is suspended, the parameters are passed across the network to the environment where the procedure is to execute (which we will refer to as the callee), and the desired procedure is executed there. When the procedure finishes and produces its results, the results are passed back to the calling environment, where execution resumes as if returning from a simple single-machine call. While the calling environment is suspended, other processes on that machine may (possibly)
still execute (depending on the details of the parallelism of that environment and the RPC implementation).
There are many attractive aspects to this idea. One is clean and simple semantics: these should make it easier to build distributed computations, and to get them right. Another is efficiency: procedure calls seem simple enough for the communication to be quite rapid. A third is generality: in single-machine computations, procedures are often the most important mechanism for communication between parts of the algorithm.
The idea of RPC has been around for many years. It has been discussed in the public literature many times since at least as far back as 1976 [15]. Nelson's doctoral dissertation [13] is an extensive examination of the design possibilities for an RPC system and has references to much of the previous work on RPC. However, full-scale implementations of RPC have been rarer than paper designs. Notable recent efforts include Courier in the Xerox NS family of protocols [4], and current work at MIT [10].
This paper results from the construction of an RPC facility for the Cedar project. We felt, because of earlier work (particularly Nelson's thesis and associated experiments), that we understood the choices the designer of an RPC facility must make. Our task was to make the choices in light of our particular aims and environment. In practice, we found that several areas were inadequately understood, and we produced a system whose design has several novel aspects. Major issues facing the designer of an RPC facility include: the precise semantics of a call in the presence of machine and communication failures; the semantics of address-containing arguments in the (possible) absence of a shared address space; integration of remote calls into existing (or future) programming systems; binding (how a caller determines the location and identity of the callee); suitable protocols for transfer of data and control between caller and callee; and how to provide data integrity and security (if desired) in an open communication network. In building our RPC package we addressed each of these issues, but it is not possible to describe all of them in suitable depth in a single paper. This paper includes a discussion of the issues and our major decisions about them, and describes the overall structure of our solution. We also describe in some detail our binding mechanism and our transport level communication protocol. We plan to produce subsequent papers describing our facilities for encryption-based security, and providing more information about the manufacture of the stub modules (which are responsible for the interpretation of arguments and results of RPC calls) and our experiences with practical use of this facility.
1.2 Environment
The remote-procedure-call package we have built was developed primarily for use within the Cedar programming environment, communicating across the Xerox research internetwork. In building such a package, some characteristics of the environment inevitably have an impact on the design, so the environment is summarized here.
Cedar [6] is a large project concerned with developing a programming environment that is powerful and convenient for the building of experimental programs and systems. There is an emphasis on uniform, highly interactive user interfaces, and ease of construction and debugging of programs. Cedar is designed to be used
on single-user workstations, although it is also used for the construction of servers (shared computers providing common services, accessible through the communication network).
Most of the computers used for Cedar are Dorados [8]. The Dorado is a very powerful machine (e.g., a simple Algol-style call and return takes less than 10 microseconds). It is equipped with a 24-bit virtual address space (of 16-bit words) and an 80-megabyte disk. Think of a Dorado as having the power of an IBM 370/168 processor, dedicated to a single user.
Communication between these computers is typically by means of a 3-megabit-per-second Ethernet [11]. (Some computers are on a 10-megabit-per-second Ethernet [7].) Most of the computers running Cedar are on the same Ethernet, but some are on different Ethernets elsewhere in our research internetwork. The internetwork consists of a large number of 3-megabit and 10-megabit Ethernets (presently about 160) connected by leased telephone and satellite links (at data rates of between 4800 and 56000 bps). We envisage that our RPC communication will follow the pattern we have experienced with other protocols: most communication is on the local Ethernet (so the much lower data rates of the internet links are not an inconvenience to our users), and the Ethernets are not overloaded (we very rarely see offered loads above 40 percent of the capacity of an Ethernet, and 10 percent is typical).
The PUP family of protocols [3] provides uniform access to any computer on this internetwork. Previous PUP protocols include simple unreliable (but high-probability) datagram service, and reliable flow-controlled byte streams. Between two computers on the same Ethernet, the lower level raw Ethernet packet format is available.
Essentially all programming is in high-level languages. The dominant language is Mesa [12] (as modified for the purposes of Cedar), although Smalltalk and InterLisp are also used. There is no assembly language for Dorados.
1.3 Aims
The primary purpose of our RPC project was to make distributed computation easy. Previously, it was observed within our research community that the construction of communicating programs was a difficult task, undertaken only by members of a select group of communication experts. Even researchers with substantial systems experience found it difficult to acquire the specialized expertise required to build distributed systems with existing tools. This seemed undesirable. We have available to us a very large, very powerful communication network, numerous powerful computers, and an environment that makes building programs relatively easy. The existing communication mechanisms appeared to be a major factor constraining further development of distributed computing. Our hope is that by providing communication with almost as much ease as local procedure calls, people will be encouraged to build and experiment with distributed applications. RPC will, we hope, remove unnecessary difficulties, leaving only the fundamental difficulties of building distributed systems: timing, independent failure of components, and the coexistence of independent execution environments.
We had two secondary aims that we hoped would support our purpose. We wanted to make RPC communication highly efficient (within, say, a factor of
five beyond the necessary transmission times of the network). This seems important, lest communication become so expensive that application designers strenuously avoid it. The applications that might otherwise get developed would be distorted by their desire to avoid communicating. Additionally, we felt that it was important to make the semantics of the RPC package as powerful as possible, without loss of simplicity or efficiency. Otherwise, the gains of a single unified communication paradigm would be lost by requiring application programmers to build extra mechanisms on top of the RPC package. An important issue in design is resolving the tension between powerful semantics and efficiency.
Our final major aim was to provide secure communication with RPC. None of the previously implemented protocols had any provision for protecting the data in transit on our networks. This was true even to the extent that passwords were transmitted as clear-text. Our belief was that research on the protocols and mechanisms for secure communication across an open network had reached a stage where it was reasonable and desirable for us to include this protection in our package. In addition, very few (if any) distributed systems had previously provided secure end-to-end communication, and it had never been applied to RPC, so the design might provide useful research insights.
1.4 Fundamental Decisions
It is not an immediate consequence of our aims that we should use procedure calls as the paradigm for expressing control and data transfers. For example, message passing might be a plausible alternative. It is our belief that a choice between these alternatives would not make a major difference in the problems faced by this design, nor in the solutions adopted. The problems of reliable and efficient transmission of a message and of its possible reply are quite similar to the problems encountered for remote procedure calls. The problems of passing arguments and results, and of network security, are essentially unchanged. The overriding consideration that made us choose procedure calls was that they were the major control and data transfer mechanism imbedded in our major language, Mesa.
One might also consider using a more parallel paradigm for our communication, such as some form of remote fork. Since our language already includes a construct for forking parallel computations, we could have chosen this as the point at which to add communication semantics. Again, this would not have changed the major design problems significantly.
We discarded the possibility of emulating some form of shared address space among the computers. Previous work has shown that with sufficient care moderate efficiency can be achieved in doing this [14]. We do not know whether an approach employing shared addresses is feasible, but two potentially major difficulties spring to mind: first, whether the representation of remote addresses can be integrated into our programming languages (and possibly the underlying machine architecture) without undue upheaval; second, whether acceptable efficiency can be achieved. For example, a host in the PUP internet is represented by a 16-bit address, so a naive implementation of a shared address space would extend the width of language addresses by 16-bits. On the other hand, it is possible that careful use of the address-mapping mechanisms of our virtual memory hardware could allow shared address space without changing the address.
width. Even on our 10 megabit Ethernets, the minimum average round trip time for a packet exchange is 120 microseconds [7], so the most likely way to approach this would be to use some form of paging system. In summary, a shared address space between participants in RPC might be feasible, but since we were not willing to undertake that research our subsequent design assumes the absence of shared addresses. Our intuition is that with our hardware the cost of a shared address space would exceed the additional benefits.
A principle that we used several times in making design choices is that the semantics of remote procedure calls should be as close as possible to those of local (single-machine) procedure calls. This principle seems attractive as a way of ensuring that the RPC facility is easy to use, particularly for programmers familiar with single-machine use of our languages and packages. Violation of this principle seemed likely to lead us into the complexities that have made previous communication packages and protocols difficult to use. This principle has occasionally caused us to deviate from designs that would seem attractive to those more experienced in distributed computing. For example, we chose to have no time-out mechanism limiting the duration of a remote call (in the absence of machine or communication failures), whereas most communication packages consider this a worthwhile feature. Our argument is that local procedure calls have no time-out mechanism, and our languages include mechanisms to abort an activity as part of the parallel processing mechanism. Designing a new time-out arrangement just for RPC would needlessly complicate the programmer's world. Similarly, we chose the building semantics described below (based closely on the existing Cedar mechanisms) in preference to the ones presented in Nelson's thesis [13].
1.5 Structure
The program structure we use for RPC is similar to that proposed in Nelson's thesis. It is based on the concept of stubs. When making a remote call, five pieces of program are involved: the user, the user-stub, the RPC communications package (known as RPCRuntime), the server-stub, and the server. Their relationship is shown in Figure 1. The user, the user-stub, and one instance of RPCRuntime execute in the caller machine; the server, the server-stub and another instance of RPCRuntime execute in the callee machine. When the user wishes to make a remote call, it actually makes a perfectly normal local call which invokes a corresponding procedure in the user-stub. The user-stub is responsible for placing a specification of the target procedure and the arguments into one or more packets and asking the RPCRuntime to transmit these reliably to the callee machine. On receipt of these packets, the RPCRuntime in the callee machine passes them to the server-stub. The server-stub unpacks them and again makes a perfectly normal local call, which invokes the appropriate procedure in the server. Meanwhile, the calling process in the caller machine is suspended awaiting a result packet. When the call in the server completes, it returns to the server-stub and the results are passed back to the suspended process in the caller machine. There they are unpacked and the user-stub returns them to the user. RPCRuntime is responsible for retransmissions, acknowledgments, packet routing, and encryption.
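The call path just described can be sketched in Python. This toy model collapses the two RPCRuntime instances and the network into a single function call, and all names are ours rather than Mesa's:

```python
# Sketch of the five-component call path: user, user-stub, RPCRuntime,
# server-stub, server. json stands in for the packet format; a direct
# function call stands in for the reliable packet exchange.
import json

def server_add(a, b):                 # the "server" procedure
    return a + b

DISPATCH = {"add": server_add}        # server-stub's dispatch table

def server_stub(packet):
    call = json.loads(packet)         # unpack the call packet
    result = DISPATCH[call["proc"]](*call["args"])  # ordinary local call
    return json.dumps({"result": result})           # pack the result packet

def rpc_runtime_transmit(packet):
    # Stands in for reliable transmission to the callee machine.
    return server_stub(packet)

def user_stub_add(a, b):
    packet = json.dumps({"proc": "add", "args": [a, b]})  # pack arguments
    reply = rpc_runtime_transmit(packet)
    return json.loads(reply)["result"]                    # unpack results

# The user makes what looks like a perfectly normal local call:
print(user_stub_add(2, 3))  # -> 5
```

In the real package the two stubs would be generated by Lupine from the interface module, and RPCRuntime would handle retransmission, acknowledgment, and routing.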
Apart from the effects of multimachine binding and of machine or communication failures, the call happens just as if the user had
invoked the procedure in the server directly. Indeed, if the user and server code were brought into a single machine and bound directly together without the stubs, the program would still work.
RPCRuntime is a standard part of the Cedar system. The user and server are written as part of the distributed application. The user-stub and server-stub, however, are automatically generated by a program called Lupine. This generation is specified by use of Mesa interface modules. These are the basis of the Mesa (and Cedar) separate compilation and binding mechanism [9]. An interface module is mainly a list of procedure names, together with the types of their arguments and results. This is sufficient information for the caller and callee to independently perform compile-time type checking and to generate appropriate calling sequences. A program module that implements procedures in an interface is said to export that interface. A program module calling procedures from an interface is said to import that interface. When writing a distributed application, a programmer first writes an interface module. Then he can write the user code that imports that interface and the server code that exports the interface. He also presents the interface to Lupine, which generates the user-stub (that exports the interface) and the server-stub (that imports the interface). When binding the programs on the caller machine, the user is bound to the user-stub. On the callee machine, the server-stub is bound to the server.
Thus, the programmer does not need to build detailed communication-related code. After designing the interface, he need only write the user and server code. Lupine is responsible for generating the code for packing and unpacking arguments and results (and other details of parameter/result semantics), and for dispatching to the correct procedure for an incoming call in the server-stub. RPCRuntime is responsible for packet-level communications. The programmer must avoid specifying arguments or results that are incompatible with the lack of shared address space. (Lupine checks this avoidance.) The programmer must also take steps to invoke the intermachine binding described in Section 2, and to handle reported machine or communication failures.
2. BINDING
There are two aspects to binding which we consider in turn. First, how does a client of the binding mechanism specify what he wants to be bound to? Second,
how does a caller determine the machine address of the callee and specify to the callee the procedure to be invoked? The first is primarily a question of naming and the second a question of location.
2.1 Naming
The binding operation offered by our RPC package is to bind an importer of an interface to an exporter of an interface. After binding, calls made by the importer invoke procedures implemented by the (remote) exporter. There are two parts to the name of an interface: the type and the instance. The type is intended to specify, at some level of abstraction, which interface the caller expects the callee to implement. The instance is intended to specify which particular implementor of an abstract interface is desired. For example, the type of an interface might correspond to the abstraction of “mail server,” and the instance would correspond to some particular mail server selected from many. A reasonable default for the type of an interface might be a name derived from the name of the Mesa interface module. Fundamentally, the semantics of an interface name are not dictated by the RPC package—they are an agreement between the exporter and the importer, not fully enforceable by the RPC package. However, the means by which an importer uses the interface name to locate an exporter are dictated by the RPC package, and these we now describe.
2.2 Locating an Appropriate Exporter
We use the Grapevine distributed database [1] for our RPC binding. The major attraction of using Grapevine is that it is widely and reliably available. Grapevine is distributed across multiple servers strategically located in our internet topology, and is configured to maintain at least three copies of each database entry. Since the Grapevine servers themselves are highly reliable and the data is replicated, it is extremely rare for us to be unable to look up a database entry. There are alternatives to using such a database, but we find them unsatisfactory. For example, we could include in our application programs the network addresses of the machines with which they wish to communicate: this would bind to a particular machine much too early for most applications. Alternatively, we could use some form of broadcast protocol to locate the desired machine: this would sometimes be acceptable, but as a general mechanism would cause too much interference with innocent bystanders, and would not be convenient for binding to machines not on the same local network.
Grapevine’s database consists of a set of entries, each keyed by a character string known as a Grapevine RName. There are two varieties of entries: individuals and groups. Grapevine keeps several items of information for each database entry, but the RPC package is concerned with only two: for each individual there is a connect-site, which is a network address, and for each group there is a member-list, which is a list of RNames. The RPC package maintains two entries in the Grapevine database for each interface name: one for the type and one for the instance; so the type and instance are both Grapevine RNames. The database entry for the instance is a Grapevine individual whose connect-site is a network address, specifically, the network address of the machine on which that instance was last exported. The database entry for the type is a Grapevine group whose members are the Grapevine RNames of the instances of that type which
have been exported. For example, if the remote interface with type FileAccess.Alpine and instance Ebbets.Alpine has been exported by a server running at network address 3#22#, and the remote interface with type FileAccess.Alpine and instance Luther.Alpine has been exported by a server running at network address 3#276#, then the members of the Grapevine group FileAccess.Alpine would include Ebbets.Alpine and Luther.Alpine. The Grapevine individual Ebbets.Alpine would have 3#22# as its connect-site and Luther.Alpine would have 3#276#.
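The FileAccess.Alpine example can be modeled with two small tables. The dictionary layout below is ours, not Grapevine's:

```python
# Toy model of the two Grapevine entry varieties the RPC package uses:
# individuals (with a connect-site) and groups (with a member-list).
# Data mirrors the FileAccess.Alpine example in the text.
individuals = {
    "Ebbets.Alpine": {"connect_site": "3#22#"},
    "Luther.Alpine": {"connect_site": "3#276#"},
}
groups = {
    "FileAccess.Alpine": ["Ebbets.Alpine", "Luther.Alpine"],
}

def exporters_of(interface_type):
    """Network addresses of every exported instance of a type."""
    return [individuals[i]["connect_site"] for i in groups[interface_type]]

print(exporters_of("FileAccess.Alpine"))  # ['3#22#', '3#276#']
```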
When an exporter wishes to make his interface available to remote clients, the server code calls the server-stub which in turn calls a procedure, ExportInterface, in the RPCRuntime. ExportInterface is given the interface name (type and instance) together with a procedure (known as the dispatcher) implemented in the server-stub which will handle incoming calls for the interface. ExportInterface calls Grapevine and ensures that the instance is one of the members of the Grapevine group which is the type, and that the connect-site of (the Grapevine individual which is) the instance is the network address of the exporting machine. This may involve updating the database. As an optimization, the database is not updated if it already contains the correct information—this is usually true: typically an interface of this name has previously been exported, and typically from the same network address. For example, to export the interface with type FileAccess.Alpine and instance Ebbets.Alpine from network address 3#22#, the RPCRuntime would ensure that Ebbets.Alpine in the Grapevine database has connect-site 3#22# and that Ebbets.Alpine is a member of FileAccess.Alpine. The RPCRuntime then records information about this export in a table maintained on the exporting machine. For each currently exported interface, this table contains the interface name, the dispatcher procedure from the server-stub, and a 32-bit value that serves as a permanently unique (machine-relative) identifier of the export. This table is implemented as an array indexed by a small integer. The identifier is guaranteed to be permanently unique by the use of successive values of a 32-bit counter; on start-up this counter is initialized to a one-second real time clock, and the counter is constrained subsequently to be less than the current value of that clock. 
This constrains the rate of calls on ExportInterface in a single machine to an average rate of less than one per second, averaged over the time since the exporting machine was restarted. The burst rate of such calls can exceed one per second (see Figure 2).
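A minimal sketch of the export table and its clock-constrained counter follows. Raising an error on overrun is our simplification (the real system merely paces calls), and passing the clock in as a parameter keeps the sketch deterministic:

```python
class ExportTable:
    """Sketch of the table of current exports kept on an exporting
    machine: for each export, the interface name, the server-stub's
    dispatcher, and a permanently unique 32-bit identifier. `now` is a
    one-second real-time clock reading; all names here are our own."""
    def __init__(self, now):
        self.entries = []            # array indexed by a small integer
        self.counter = int(now)      # initialized from the clock on start-up

    def export_interface(self, name, dispatcher, now):
        uid = self.counter + 1       # successive 32-bit counter values...
        if uid >= int(now):          # ...constrained below the current clock,
            raise RuntimeError(      # i.e. under one export/second on average
                "ExportInterface rate exceeds one per second")
        self.counter = uid
        self.entries.append({"name": name, "dispatcher": dispatcher,
                             "uid": uid})
        return len(self.entries) - 1, uid   # (table index, unique identifier)

table = ExportTable(now=1000)                # machine restarted at clock 1000
print(table.export_interface("FileAccess.Alpine", None, now=1005))  # (0, 1001)
```

Note that several exports can succeed at the same clock reading (here, until the counter catches up with 1005), which is why the burst rate can exceed one per second.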
When an importer wishes to bind to an exporter, the user code calls its user-stub which in turn calls a procedure, ImportInterface, in the RPCRuntime, giving it the desired interface type and instance. The RPCRuntime determines the network address of the exporter (if there is one) by asking Grapevine for the network address which is the connect-site of the interface instance. The RPCRuntime then makes a remote procedure call to the RPCRuntime package on that machine asking for the binding information associated with this interface type and instance. If the specified machine is not currently exporting that interface this fact is returned to the importing machine and the binding fails. If the specified machine is currently exporting that interface, then the table of current exports maintained by its RPCRuntime yields the corresponding unique identifier; the identifier and the table index are returned to the importing machine.
The binding then succeeds, and the exporter's network address, identifier, and table index are remembered by the user-stub for use in remote calls.
Subsequently, when that user-stub is making a call on the imported remote interface, the call packet it manufactures contains the unique identifier and table index of the desired interface, and the entry point number of the desired procedure relative to the interface. When the RPCRuntime on the callee machine receives a new call packet it uses the index to look up its table of current exports (efficiently), verifies that the unique identifier in the packet matches that in the table, and passes the call packet to the dispatcher procedure specified in the table.
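The callee-side check can be sketched as follows; the packet fields and names are hypothetical:

```python
# Sketch of how the callee's RPCRuntime validates an incoming call
# packet: the table index gives O(1) lookup, and the unique identifier
# guards against bindings that predate a crash and restart.
def handle_call_packet(export_table, packet):
    entry = export_table[packet["index"]]         # efficient array lookup
    if entry is None or entry["uid"] != packet["uid"]:
        return {"error": "binding no longer valid"}   # stale binding
    return entry["dispatcher"](packet["proc"], packet["args"])

table = [{"uid": 1001,
          "dispatcher": lambda proc, args: {"result": sum(args)}}]
print(handle_call_packet(table, {"index": 0, "uid": 1001,
                                 "proc": "add", "args": [2, 3]}))
# {'result': 5}
print(handle_call_packet(table, {"index": 0, "uid": 9999,
                                 "proc": "add", "args": [2, 3]}))
# {'error': 'binding no longer valid'}
```

Because the identifier changes whenever the exporter restarts, a caller holding an old binding is rejected on its next call rather than silently reaching a fresh incarnation of the server.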
There are several variants of this binding scheme available to our clients. If the importer calling ImportInterface specifies only the interface type but no instance, the RPCRuntime obtains from Grapevine the members of the Grapevine group named by the type. The RPCRuntime then obtains the network address for each of those Grapevine individuals, and tries the addresses in turn to find some instance that will accept the binding request: this is done efficiently,
and in an order which tends to locate the closest (most responsive) running exporter. This allows an importer to become bound to the closest running instance of a replicated service, where the importer does not care which instance. Of course, an importer is free to enumerate the instances himself, by enumerating the members of the group named by the type.
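Binding by type alone might look like the following sketch, where the responsiveness ordering and `try_bind` are our stand-ins for the RPCRuntime's probing of candidate exporters:

```python
# Sketch of instance-less import: enumerate the Grapevine group named
# by the type, then try each exporter, most responsive first, until one
# accepts the binding request. Data and round-trip times are invented.
groups = {"FileAccess.Alpine": ["Ebbets.Alpine", "Luther.Alpine"]}
individuals = {"Ebbets.Alpine": {"connect_site": "3#22#"},
               "Luther.Alpine": {"connect_site": "3#276#"}}
rtt = {"Ebbets.Alpine": 0.004, "Luther.Alpine": 0.001}   # Luther is closer

def import_by_type(itype, try_bind):
    for m in sorted(groups[itype], key=lambda m: rtt[m]):  # closest first
        binding = try_bind(individuals[m]["connect_site"])
        if binding is not None:
            return m, binding
    raise RuntimeError("no running exporter of " + itype)

# Suppose the closest instance (Luther) is down and refuses the binding:
down = {"3#276#"}
def try_bind(addr):
    return None if addr in down else ("binding-to-" + addr)

print(import_by_type("FileAccess.Alpine", try_bind))
# ('Ebbets.Alpine', 'binding-to-3#22#')
```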
The instance may be a network address constant instead of a Grapevine name. This would allow the importer to bind to the exporter without any interaction with Grapevine, at the cost of including an explicit address in the application programs.
2.3 Discussion
There are some important effects of this scheme. Notice that importing an interface has no effect on the data structures in the exporting machine; this is advantageous when building servers that may have hundreds of users, and avoids problems regarding what the server should do about this information in relation to subsequent importer crashes. Also, use of the unique identifier scheme means that bindings are implicitly broken if the exporter crashes and restarts (since the currency of the identifier is checked on each call). We believe that this implicit unbinding is the correct semantics: otherwise a user will not be notified of a crash happening between calls. Finally, note that this scheme allows calls to be made only on procedures that have been explicitly exported through the RPC mechanism. An alternate, slightly more efficient scheme would be to issue importers with the exporter's internal representation of the server-stub dispatcher procedure; this we considered undesirable since it would allow unchecked access to almost any procedure in the server machine and, therefore, would make it impossible to enforce any protection or security schemes.
The access controls that restrict updates to the Grapevine database have the effect of restricting the set of users who will be able to export particular interface names. These are the desired semantics: it should not be possible, for example, for a random user to claim that his workstation is a mail server and thereby be able to intercept my message traffic. In the case of a replicated service, this access control effect is critical. A client of a replicated service may not know a priori the names of the instances of the service. If the client wishes to use two-way authentication to get the assurance that the service is genuine, and if we wish to avoid using a single password for identifying every instance of the service, then the client must be able to securely obtain the list of names of the instances of the service. We can achieve this security by employing a secure protocol when the client interacts with Grapevine as the interface is being imported. Thus Grapevine's access controls provide the client with assurance that an instance of the service is genuine (authorized).
We have allowed several choices for binding time. The most flexible is where the importer specifies only the type of the interface and not its instance: here the decision about the interface instance is made dynamically. Next (and most common) is where the interface instance is an RName, delaying the choice of a particular exporting machine. Most restrictive is the facility to specify a network address as an instance, thus binding it to a particular machine at compile time. We also provide facilities allowing an importer to dynamically instantiate interfaces and to import them. A detailed description of how this is done would be
too complicated for this paper, but in summary it allows an importer to bind his program to several exporting machines, even when the importer cannot know statically how many machines he wishes to bind to. This has proved to be useful in some open-ended multimachine algorithms, such as implementing the manager of a distributed atomic transaction. We have not allowed binding at a finer grain than an entire interface; we never considered this an option, having seen no use for such a mechanism in the packages and systems we have observed.
3. PACKET-LEVEL TRANSPORT PROTOCOL
3.1 Requirements
The semantics of RPCs can be achieved without designing a specialized packet-level protocol. For example, we could have built our package using the PUP byte stream protocol (or the Xerox NS sequenced packet protocol) as our transport layer. Some of our previous experiments [13] were made using PUP byte streams, and the Xerox NS “Courier” RPC protocol [4] uses the NS sequenced packet protocol. Grapevine protocols are essentially similar to remote procedure calls, and use PUP byte streams. Our measurements [13] and experience with each of these implementations convinced us that this approach was unsatisfactory. The particular nature of RPC communication means that there are substantial performance gains available if one designs and implements a transport protocol specially for RPC. Our experiments indicated that a performance gain of a factor of ten might be possible.
An intermediate stance might seem tenable: we have never tried the experiment of taking an existing transport protocol and building an implementation of it specialized for RPC. However, the request-response nature of communication with RPC is sufficiently unlike the large data transfers for which byte streams are usually employed that we do not believe such an intermediate position would be profitable.
One aim we emphasized in our protocol design was minimizing the elapsed real-time between initiating a call and getting results. With protocols for bulk data transfer this is not important: most of the time is spent actually transferring the data. We also strove to minimize the load imposed on a server by substantial numbers of users. When performing bulk data transfers, it is acceptable to adopt schemes that lead to a large cost for setting up and taking down connections, and that require maintenance of substantial state information during a connection. These are acceptable because the costs are likely to be small relative to the data transfer itself. This, we believe, is untrue for RPC. We envisage our machines being able to serve substantial numbers of clients, and it would be unacceptable to require either a large amount of state information or expensive connection handshaking.
It is this level of the RPC package that defines the semantics and the guarantees we give for calls. We guarantee that if the call returns to the user then the procedure in the server has been invoked precisely once. Otherwise, an exception is reported to the user and the procedure will have been invoked either once or not at all—the user is not told which. If an exception is reported, the user does not know whether the server has crashed or whether there is a problem in the communication network. Provided the RPCRuntime on the server machine is
still responding, there is no upper bound on how long we will wait for results; that is, we will abort a call if there is a communication breakdown or a crash but not if the server code deadlocks or loops. This is identical to the semantics of local procedure calls.
3.2 Simple Calls
We have tried to make the per call communication particularly efficient for the situation where all of the arguments will fit in a single packet buffer, as will all of the results, and where frequent calls are being made. To make a call, the caller sends a *call packet* containing a call identifier (discussed below), data specifying the desired procedure (as described in connection with binding), and the arguments. When the callee machine receives this packet the appropriate procedure is invoked. When the procedure returns, a *result packet* containing the same call identifier, and the results, is sent back to the caller.
The machine that transmits a packet is responsible for retransmitting it until an acknowledgment is received, in order to compensate for lost packets. However, the result of a call is sufficient acknowledgment that the call packet was received, and a call packet is sufficient to acknowledge the result packet of the previous call made by that process. Thus in a situation where the duration of a call and the interval between calls are each less than the transmission interval, we transmit precisely two packets per call (one in each direction). If the call lasts longer or there is a longer interval between calls, up to two additional packets may be sent (the retransmission and an explicit acknowledgment packet); we believe this to be acceptable because in those situations it is clear that communication costs are no longer the limiting factor on performance.
The call identifier serves two purposes. It allows the caller to determine that the result packet is truly the result of his current call (not, for example, a much delayed result of some previous call), and it allows the callee to eliminate duplicate call packets (caused by retransmissions, for example). The call identifier consists of the calling machine identifier (which is permanent and globally unique), a machine-relative identifier of the calling process, and a sequence number. We term the pair [machine identifier, process] an *activity*. The important property of an activity is that each activity has at most one outstanding remote call at any time—it will not initiate a new call until it has received the results of the preceding call. The call sequence number must be monotonic for each activity (but not necessarily sequential). The RPCRuntime on a callee machine maintains a table giving the sequence number of the last call invoked by each calling activity. When a call packet is received, its call identifier is looked up in this table. The call packet can be discarded as a duplicate (possibly after acknowledgment) unless its sequence number is greater than that given in this table. Figure 3 shows the packets transmitted in simple calls.
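The sequence-number table can be sketched directly:

```python
# Sketch of duplicate elimination at the callee: one table entry per
# calling activity (machine, process) holding the sequence number of
# the last call invoked; a call packet is accepted only if its number
# is strictly greater.
last_seq = {}   # (machine, process) activity -> last sequence number invoked

def accept_call(machine, process, seq):
    """True if this call packet should be invoked; False if it is a
    duplicate (e.g. a retransmission) to be discarded."""
    activity = (machine, process)
    if seq <= last_seq.get(activity, -1):
        return False
    last_seq[activity] = seq      # monotonic, not necessarily sequential
    return True

print(accept_call("M1", 7, 10))   # True: new call from this activity
print(accept_call("M1", 7, 10))   # False: retransmitted duplicate
print(accept_call("M1", 7, 42))   # True: sequence numbers may skip values
```

The one-outstanding-call-per-activity property is what makes this single entry per activity sufficient; no window of recent sequence numbers is needed.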
Fig. 3. The packets transmitted during a simple call.

It is interesting to compare this arrangement with connection establishment, maintenance and termination in more heavyweight transport protocols. In our protocol, we think of a *connection* as the shared state information between an activity on a calling machine and the RPCRuntime package on the server machine accepting calls from that activity. We require no special connection establishment protocol (compared with the two-packet handshake required in many other protocols); receipt of a call packet from a previously unknown activity is sufficient to create the connection implicitly. When the connection is active (when there is a call being handled, or when the last result packet of the call has not yet been acknowledged), both ends maintain significant amounts of state information. However, when the connection is idle the only state information in the server machine is the entry in its table of sequence numbers. A caller has minimal state information when a connection is idle: a single machine-wide counter is sufficient. When initiating a new call, its sequence number is just the next value of this counter. This is why sequence numbers in the calls from an activity are required only to be monotonic, not sequential. When a connection is idle, no process in either machine is concerned with the connection. No communications (such as “pinging” packet exchanges) are required to maintain idle connections. We have no explicit connection termination protocol. If a connection is idle, the server machine may discard its state information after an interval, when there is no longer any danger of receiving retransmitted call packets (say, after five minutes), and it can do so without interacting with the caller machine. This scheme provides the guarantees of traditional connection-oriented protocols without the costs. Note, however, that we rely on the unique identifier we introduced when doing remote binding. Without this identifier we would be unable to detect duplicates if a server crashed and then restarted while a caller was still retransmitting a call packet (not very likely, but just plausible). We are also assuming that the call sequence number from an activity does not repeat even if the calling machine is restarted (otherwise a call from the restarted machine might be eliminated as a duplicate).
In practice, we achieve this as a side effect of a 32-bit conversation identifier which we use in connection with secure calls. For non-secure calls, a conversation identifier may be thought of as a permanently unique identifier which distinguishes incarnations of a calling machine. The conversation identifier is passed with the call sequence number on every call. We generate conversation identifiers based on a 32-bit clock maintained by every machine (initialized from network time servers when a machine restarts).
From experience with previous systems, we anticipate that this light-weight connection management will be important in building large and busy distributed systems.
3.3 Complicated Calls
As mentioned above, the transmitter of a packet is responsible for retransmitting it until it is acknowledged. In doing so, the packet is modified to request an explicit acknowledgment. This handles lost packets, long duration calls, and long gaps between calls. When the caller is satisfied with its acknowledgments, the caller process waits for the result packet. While waiting, however, the caller periodically sends a probe packet to the callee, which the callee is expected to acknowledge. This allows the caller to notice if the callee has crashed or if there is some serious communication failure, and to notify the user of an exception. Provided these probes continue to be acknowledged the caller will wait indefinitely, happy in the knowledge that the callee is (or claims to be) working on the call. In our implementation the first of these probes is issued after a delay of slightly more than the approximate round-trip time between the machines. The interval between probes increases gradually, until, after about 10 minutes, the probes are being sent once every five minutes. Each probe is subject to retransmission strategies similar to those used for other packets of the call. So if there is a communication failure, the caller will be told about it fairly soon, relative to the total time the caller has been waiting for the result of the call. Note that this will only detect failures in the communication levels: it will not detect if the callee has deadlocked while working on the call. This is in keeping with our principle of making RPC semantics similar to local procedure call semantics. We have language facilities available for watching a process and aborting it if this seems appropriate; these facilities are just as suitable for a process waiting on a remote call.
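Assuming an exponential growth of the probe interval (the text gives only the endpoints: slightly more than one round-trip time at first, five minutes eventually), the schedule might look like:

```python
# Sketch of the probe schedule while waiting for results. The growth
# factor and cap are illustrative constants, not the implementation's.
def probe_times(rtt, horizon, growth=2.0, cap=300.0):
    """Elapsed seconds at which probes are sent, up to `horizon`."""
    t = interval = rtt * 1.1      # slightly more than one round trip
    times = []
    while t <= horizon:
        times.append(round(t, 3))
        interval = min(interval * growth, cap)  # gradual increase, capped
        t += interval
    return times

print(probe_times(rtt=0.12, horizon=60))
```

With the five-minute cap, a caller that has already waited hours learns of a communication failure within minutes, which is short relative to its total wait.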
A possible alternative strategy for retransmissions and acknowledgments is to have the recipient of a packet spontaneously generate an acknowledgment if he doesn't generate the next packet significantly sooner than the expected retransmission interval. This would save the retransmission of a packet when dealing with long duration calls or large gaps between calls. We decided that saving this packet was not a large enough gain to merit the extra cost of detecting that the spontaneous acknowledgment was needed. In our implementation this extra cost would be in the form of maintaining an additional data structure to enable an extra process in the server to generate the spontaneous acknowledgment, when appropriate, plus the computational cost of the extra process deciding when to generate the acknowledgment. In particular, it would be difficult to avoid incurring extra cost when the acknowledgment is not needed. There is no analogous extra cost to the caller, since the caller necessarily has a retransmission algorithm in case the call packet is lost.
If the arguments (or results) are too large to fit in a single packet, they are sent in multiple packets, with each but the last requesting explicit acknowledgment. Thus, when transmitting a large call argument, packets are sent alternately by the caller and callee, with the caller sending data packets and the callee responding with acknowledgments. This allows the implementation to use only one packet buffer at each end for the call, and avoids the necessity of including the buffering and flow control strategies found in normal bulk data transfer protocols. To permit duplicate elimination, each of these data packets within a call has a call-relative sequence number. Figure 4 shows the packet sequences for complicated calls.
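A sketch of the alternating data/acknowledgment exchange, with `transmit` standing in for the RPCRuntime's send-and-await-acknowledgment (all names are ours):

```python
# A large argument record is sent in multiple data packets, each but
# the last requiring an explicit acknowledgment, with call-relative
# sequence numbers for duplicate elimination. One buffer per end.
def send_large_args(data, packet_size, transmit):
    """Split `data` into packets; `transmit` returns True when the
    receiver acknowledges (required for every packet but the last)."""
    chunks = [data[i:i + packet_size]
              for i in range(0, len(data), packet_size)]
    for seq, chunk in enumerate(chunks):
        last = (seq == len(chunks) - 1)
        acked = transmit({"seq": seq, "last": last, "data": chunk})
        if not last and not acked:
            raise RuntimeError("missing acknowledgment for packet %d" % seq)
    return len(chunks)

received = []
def transmit(pkt):
    received.append(pkt["data"])
    return not pkt["last"]        # receiver acks all but the last packet

n = send_large_args(b"0123456789", 4, transmit)
print(n, b"".join(received))      # 3 b'0123456789'
```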
As described in Section 3.1, this protocol concentrates on handling simple calls on local networks. If the call requires more than one packet for its arguments or results, our protocol sends more packets than are logically required. We believe this is acceptable; there is still a need for protocols designed for efficient transfer of bulk data, and we have not tried to incorporate both RPC and bulk data in a single protocol. For transferring a large amount of data in one direction, our protocol sends up to twice as many packets as a good bulk data protocol would send (since we acknowledge each packet). This would be particularly inappropriate across long haul networks with large delays and high data rates. However, if the communication activity can reasonably be represented as procedure calls, then our protocol has desirable characteristics even across such long haul networks. It is sometimes practical to use RPC for bulk data transfer across such networks, by multiplexing the data between several processes each of which is making single packet calls—the penalty then is just the extra acknowledgment per packet, and in some situations this is acceptable. The dominant advantage of requiring one acknowledgment for each argument packet (except the last one) is that it simplifies and optimizes the implementation. It would be possible to
use our protocol for simple calls, and to switch automatically to a more conventional protocol for complicated ones. We have not explored this possibility.
3.4 Exception Handling
The Mesa language provides quite elaborate facilities for a procedure to notify exceptions to its caller. These exceptions, called *signals*, may be thought of as dynamically bound procedure activations: when an exception is raised, the Mesa runtime system dynamically scans the call stack to determine if there is a *catch phrase* for the exception. If so, the body of the catch phrase is executed, with arguments given when the exception was raised. The catch phrase may return (with results) causing execution to resume where the exception was raised, or the catch phrase may terminate with a jump out into a lexically enclosing context. In the case of such termination, the dynamically newer procedure activations on the call stack are unwound (in most-recent-first order).
Our RPC package faithfully emulates this mechanism. There are facilities in the protocol to allow the process on the server machine handling a call to transmit an exception packet in place of a result packet. This packet is handled by the RPCRuntime on the caller machine approximately as if it were a call packet, but instead of invoking a new call it raises an exception in the appropriate process. If there is an appropriate catch phrase, it is executed. If the catch phrase returns, the results are passed back to the callee machine, and events proceed normally. If the catch phrase terminates by a jump then the callee machine is so notified, which then unwinds the appropriate procedure activations. Thus we have again emulated the semantics of local calls. This is not quite true: in fact we permit the callee machine to communicate only those exceptions which are defined in the Mesa interface which the callee exported. This simplifies our implementation (in translating the exception names from the callee's machine environment to the caller's), and provides some protection and debugging assistance. The programming convention in single machine programs is that if a package wants to communicate an exception to its caller then the exception should be defined in the package's interface; other exceptions should be handled by a debugger. We have maintained and enforced this convention for RPC exceptions.
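The restriction to interface-defined exceptions can be sketched as follows; `FileBusy` and the packet format are hypothetical, and Python's exception mechanism stands in for Mesa signals (without the resumption semantics):

```python
# Only exceptions declared in the exported interface may cross the
# machine boundary; anything else stays on the callee machine for a
# debugger there.
class FileBusy(Exception): pass        # declared in the interface
INTERFACE_EXCEPTIONS = {"FileBusy": FileBusy}

def server_stub_call(proc, args):
    try:
        return {"result": proc(*args)}
    except Exception as e:
        name = type(e).__name__
        if name in INTERFACE_EXCEPTIONS:
            return {"exception": name, "args": e.args}  # exception packet
        raise                                           # debugger's job

def user_stub_receive(packet):
    if "exception" in packet:          # re-raise in the calling process
        raise INTERFACE_EXCEPTIONS[packet["exception"]](*packet["args"])
    return packet["result"]

def open_file(name):                   # a "server" procedure
    raise FileBusy(name)

try:
    user_stub_receive(server_stub_call(open_file, ["budget.txt"]))
except FileBusy as e:
    print("caught:", e.args[0])        # caught: budget.txt
```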
In addition to exceptions raised by the callee, the RPCRuntime may raise a *call failed* exception if there is some communication difficulty. This is the primary way in which our clients note the difference between local and remote calls.
3.5 Use of Processes
In Mesa and Cedar, parallel processes are available as a built-in language feature. Process creation and changing the processor state on a process swap are considered inexpensive. For example, forking a new process costs about as much as ten (local) procedure calls. A process swap involves swapping an evaluation stack and one register, and invalidating some cached information. However, on the scale of a remote procedure call, process creation and process swaps can amount to a significant cost. This was shown by some of Nelson's experiments [13]. Therefore we took care to keep this cost low when building this package and designing our protocol.
The first step in reducing cost is maintaining in each machine a stock of idle server processes willing to handle incoming packets. This means that a call can be handled without incurring the cost of process creation, and without the cost of initializing some of the state of the server process. When a server process is entirely finished with a call, it reverts to its idle state instead of dying. Of course, excess idle server processes kill themselves if they were created in response to a transient peak in the number of RPC calls.
Each packet contains a process identifier for both source and destination. In packets from the caller machine, the source process identifier is the calling process. In packets from the callee machine, the source process identifier is the server process handling the call. During a call, when a process transmits a packet it sets the destination process identifier in the packet from the source process identifier in the preceding packet of the call. If a process is waiting for the next packet in a call, the process notes this fact in a (simple) data structure shared with our Ethernet interrupt handler. When the interrupt handler receives an RPC packet, it looks at the destination process identifier. If the corresponding process on this machine is at this time waiting for an RPC packet, then the incoming packet is dispatched directly to that process. Otherwise, the packet is dispatched to an idle server process (which then decides whether the packet is part of a current call requiring an acknowledgment, the start of a new call that this server process should handle, or a duplicate that may be discarded). This means that in most cases an incoming packet is given to the process that wants it with one process swap. (Of course, these arrangements are resilient to being given an incorrect process identifier.) When a calling activity initiates a new call, it attempts to use as its destination the identifier of the process that handled the previous call from that activity. This is beneficial, since that process is probably waiting for an acknowledgment of the results of the previous call, and the new call packet will be sufficient acknowledgment. Only a slight performance degradation will result from the caller using a wrong destination process, so a caller maintains only a single destination process for each calling process.
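The dispatch decision can be sketched as follows. This is our own Python sketch, not the paper's code; the real logic is Mesa code inside the Ethernet interrupt handler, and the names `waiting` and `idle_servers` are invented (the `waiting` table stands in for the "simple data structure shared with the interrupt handler").

```python
# Hypothetical sketch (ours) of the interrupt handler's dispatch decision.

waiting = {}       # destination process id -> packets handed to that process
idle_servers = []  # pool of idle server processes (represented by their ids)

def register_waiting(pid):
    """A process notes that it is waiting for the next packet of a call."""
    waiting.setdefault(pid, [])

def dispatch(packet):
    """Route an incoming RPC packet; returns the receiving process id."""
    dest = packet["dst"]
    if dest in waiting:
        # The addressed process is waiting: direct handoff, one process swap.
        waiting[dest].append(packet)
        return dest
    if idle_servers:
        # Otherwise an idle server decides whether this starts a new call,
        # acknowledges a current one, or is a duplicate to discard.
        return idle_servers.pop()
    return None  # no idle server available in this toy sketch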
In summary, the normal sequence of events is as follows: A process wishing to make a call manufactures the first packet of the call, guesses a plausible value for the destination process identifier and sets the source to be itself. It then presents the packet to the Ethernet output device and waits for an incoming packet. In the callee machine, the interrupt handler receives the packet and notifies an appropriate server process. The server process handles the packet, then manufactures the response packet. The destination process identifier in this packet will be that of the process waiting in the caller machine. When the response packet arrives in the caller machine, the interrupt handler there passes it directly to the calling process. The calling process now knows the process identifier of the server process, and can use this in subsequent packets of the call, or when initiating a later call.
The effect of this scheme is that in simple calls no processes are created, and there are typically only four process swaps in each call. Inherently, the minimum possible number of process swaps is two (unless we busy-wait)—we incurred the extra two because incoming packets are handled by an interrupt handler instead of being dispatched to the correct process directly by the device microcode (because we decided not to write specialized microcode).
3.6 Other Optimizations
The above discussion shows some optimizations we have adopted: we use subsequent packets for implicit acknowledgment of previous packets, we attempt to minimize the costs of maintaining our connections, we avoid costs of establishing and terminating connections, and we reduce the number of process switches involved in a call. Some other detailed optimizations also have significant payoff.
When transmitting and receiving RPC packets we bypass the software layers that correspond to the normal layers of a protocol hierarchy. (Actually, we only do so in cases where caller and callee are on the same network—we still use the protocol hierarchy for internetwork routing.) This provides substantial performance gains, but is, in a sense, cheating: it is a successful optimization because only the RPC package uses it. That is, we have modified the network-driver software to treat RPC packets as a special case; this would not be profitable if there were ten special cases. However, our aims imply that RPC is a special case: we intend it to become the dominant communication protocol. We believe that the utility of this optimization is not just an artifact of our particular implementation of the layered protocol hierarchy. Rather, it will always be possible for one particular transport level protocol to improve its performance significantly by bypassing the full generality of the lower layers.
There are reasonable optimizations that we do not use: we could refrain from using the internet packet format for local network communication, we could use specialized packet formats for the simple calls, we could implement special purpose network microcode, we could forbid non-RPC communication, or we could save even more process switches by using busy-waits. We have avoided these optimizations because each is in some way inconvenient, and because we believe we have achieved sufficient efficiency for our purposes. Using them would probably have provided an extra factor of two in our performance.
3.7 Security
Our RPC package and protocol include facilities for providing encryption-based security for calls. These facilities use Grapevine as an authentication service (or key distribution center) and use the federal data encryption standard [5]. Callers are given a guarantee of the identity of the callee, and vice versa. We provide full end-to-end encryption of calls and results. The encryption techniques provide protection from eavesdropping (and conceal patterns of data), and detect attempts at modification, replay, or creation of calls. Unfortunately, there is insufficient space to describe here the additions and modifications we have made to support this mechanism. It will be reported in a later paper.
4. PERFORMANCE
As we have mentioned already, Nelson’s thesis included extensive analysis of several RPC protocols and implementations, and included an examination of the contributing factors to the differing performance characteristics. We do not repeat that information here.
We have made the following measurements of the use of our RPC package. The measurements were made for remote calls between two Dorados connected by an Ethernet.
Table I. Performance Results for Some Examples of Remote Calls
<table>
<thead>
<tr>
<th>Procedure</th>
<th>Minimum (μs)</th>
<th>Median (μs)</th>
<th>Transmission (μs)</th>
<th>Local-only (μs)</th>
</tr>
</thead>
<tbody>
<tr>
<td>no args/results</td>
<td>1059</td>
<td>1097</td>
<td>131</td>
<td>9</td>
</tr>
<tr>
<td>1 arg/result</td>
<td>1070</td>
<td>1105</td>
<td>142</td>
<td>10</td>
</tr>
<tr>
<td>2 args/results</td>
<td>1077</td>
<td>1127</td>
<td>152</td>
<td>11</td>
</tr>
<tr>
<td>4 args/results</td>
<td>1115</td>
<td>1171</td>
<td>174</td>
<td>12</td>
</tr>
<tr>
<td>10 args/results</td>
<td>1222</td>
<td>1278</td>
<td>239</td>
<td>17</td>
</tr>
<tr>
<td>1 word array</td>
<td>1069</td>
<td>1111</td>
<td>131</td>
<td>10</td>
</tr>
<tr>
<td>4 word array</td>
<td>1106</td>
<td>1153</td>
<td>174</td>
<td>13</td>
</tr>
<tr>
<td>10 word array</td>
<td>1214</td>
<td>1250</td>
<td>239</td>
<td>16</td>
</tr>
<tr>
<td>40 word array</td>
<td>1643</td>
<td>1695</td>
<td>566</td>
<td>51</td>
</tr>
<tr>
<td>100 word array</td>
<td>2915</td>
<td>2926</td>
<td>1219</td>
<td>98</td>
</tr>
<tr>
<td>resume except'n</td>
<td>2555</td>
<td>2637</td>
<td>284</td>
<td>134</td>
</tr>
<tr>
<td>unwind except'n</td>
<td>3374</td>
<td>3467</td>
<td>284</td>
<td>196</td>
</tr>
</tbody>
</table>
The Ethernet had a raw data rate of 2.94 megabits per second. The Dorados were running Cedar. The measurements were made on an Ethernet shared with other users, but the network was lightly loaded (apart from our tests), at five to ten percent of capacity. The times shown in Table I are all in microseconds, and were measured by counting Dorado microprocessor cycles and dividing by the known crystal frequency. They are accurate to within about ten percent. The times are elapsed times: they include time spent waiting for the network and time used by interference from other devices. We are measuring from when the user program invokes the local procedure exported by the user-stub until the corresponding return from that procedure call. This interval includes the time spent inside the user-stub, the RPCRuntime on both machines, the server-stub, and the server implementation of the procedures (and transmission times in both directions). The test procedures were all exported to a single interface. We were not using any of our encryption facilities.
We measured individually the elapsed times for 12,000 calls on each procedure. Table I shows the minimum elapsed time we observed, and the median time. We also present the total packet transmission times for each call (as calculated from the known packet sizes used by our protocol, rather than from direct measurement). Finally, we present the elapsed time for making corresponding calls if the user program is bound directly to the server program (i.e., when making a purely local call, without any involvement of the RPC package). The time for purely local calls should provide the reader with some calibration of the speed of the Dorado processor and the Mesa language. The times for local calls also indicate what part of the total time is due to the use of RPC.
The first five procedures had, respectively, 0, 1, 2, 4 and 10 arguments and 0, 1, 2, 4 and 10 results, each argument or result being 16 bits long. The next five procedures all had one argument and one result, each argument or result being an array of size 1, 4, 10, 40 and 100 words respectively. The second line from the bottom shows a call on a procedure that raises an exception which the caller resumes. The last line is for the same procedure raising an exception that the caller causes to be unwound.
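As a worked example of reading Table I, the package's own per-call cost (stubs, RPCRuntime, and process swaps) can be estimated by subtracting the packet transmission time and the local-only call time from the median elapsed time. The breakdown below is our reading of the table, not a measurement reported by the paper:

```python
# Reading Table I (all times in microseconds, copied from the table above).
rows = {
    "no args/results": {"median": 1097, "transmission": 131, "local": 9},
    "10 word array":   {"median": 1250, "transmission": 239, "local": 16},
}

def rpc_overhead(row):
    """Elapsed time not accounted for by transmission or the procedure body."""
    return row["median"] - row["transmission"] - row["local"]

assert rpc_overhead(rows["no args/results"]) == 957   # microseconds
assert rpc_overhead(rows["10 word array"]) == 995     # microseconds
```

On this reading, roughly a millisecond of each simple call is spent outside transmission and the procedure body, which is consistent with the paper's emphasis on minimizing process swaps and per-packet handling.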
For transferring large amounts of data in one direction, protocols other than RPC have an advantage, since they can transmit fewer packets in the other direction. Nevertheless, by interleaving parallel remote calls from multiple processes, we have achieved a data rate of 2 megabits per second transferring between Dorado main memories on the 3 megabit Ethernet. This is equal to the rate achieved by our most highly optimized byte stream implementation (written in BCPL).
We have not measured the cost of exporting or importing an interface. Both of these operations are dominated by the time spent talking to the Grapevine server(s). After locating the exporter machine, calling the exporter to determine the dispatcher identifier uses an RPC call with a few words of data.
5. STATUS AND DISCUSSIONS
The package as we have described it is fully implemented and in use by Cedar programmers. The entire RPCRuntime package amounts to four Cedar modules (packet exchange, packet sequencing, binding and security), totalling about 2,200 lines of source code. Lupine (the stub generator) is substantially larger. Clients are using RPC for several projects, including the complete communication protocol for Alpine (a file server supporting multimachine transactions), and the control communication for an Ethernet-based telephone and audio project. (It has also been used for two network games, providing real-time communication between players on multiple machines.) All of our clients have found the package convenient to use, although neither of the projects is yet in full-scale use. Implementations of the protocol have been made for BCPL, InterLisp, SmallTalk and C.
We are still in the early stages of acquiring experience with the use of RPC and certainly more work needs to be done. We will have much more confidence in the strength of our design and the appropriateness of RPC when it has been used in earnest by the projects that are now committing to it. There are certain circumstances in which RPC seems to be the wrong communication paradigm. These correspond to situations where solutions based on multicasting or broadcasting seem more appropriate [2]. It may be that in a distributed environment there are times when procedure calls (together with our language's parallel processing and coroutine facilities) are not a sufficiently powerful tool, even though there do not appear to be any such situations in a single machine.
One of our hopes in providing an RPC package with high performance and low cost is that it will encourage the development of new distributed applications that were formerly infeasible. At present it is hard to justify some of our insistence on good performance because we lack examples demonstrating the importance of such performance. But our belief is that the examples will come: the present lack is due to the fact that, historically, distributed communication has been inconvenient and slow. Already we are starting to see distributed algorithms being developed that are not considered a major undertaking; if this trend continues we will have been successful.
A question on which we are still undecided is whether a sufficient level of performance for our RPC aims can be achieved by a general purpose transport protocol whose implementation adopts strategies suitable for RPC as well as ones suitable for bulk data transfer. Certainly, there is no entirely convincing argument that it would be impossible. On the other hand, we have not yet seen it achieved.
We believe the parts of our RPC package here discussed are of general interest in several ways. They represent a particular point in the design spectrum of RPC. We believe that we have achieved very good performance without adopting extreme measures, and without sacrificing useful call and parameter semantics. The techniques for managing transport level connections so as to minimize the communication costs and the state that must be maintained by a server are important in our experience of servers dealing with large numbers of users. Our binding semantics are quite powerful, but conceptually simple for a programmer familiar with single machine binding. They were easy and efficient to implement.
REFERENCES
Received March 1983; revised November 1983; accepted November 1983
Self-managed collections: Off-heap memory management for scalable query-dominated collections
Fabian Nagel
University of Edinburgh, UK
F.O.Nagel@sms.ed.ac.uk
Gavin Bierman
Oracle Labs, Cambridge, UK
Gavin.Bierman@oracle.com
Aleksandar Dragojevic
Microsoft Research, Cambridge, UK
alekd@microsoft.com
Stratis D. Viglas
University of Edinburgh, UK
sviglas@inf.ed.ac.uk
ABSTRACT
Explosive growth in DRAM capacities and the emergence of language-integrated query enable a new class of managed applications that perform complex query processing on huge volumes of data stored as collections of objects in the memory space of the application. While more flexible in terms of schema design and application development, this approach typically suffers sub-par query execution performance compared with specialized systems such as a DBMS.
To address this issue, we propose self-managed collections, which utilize off-heap memory management and dynamic query compilation to improve the performance of querying managed data through language-integrated query. We evaluate self-managed collections using both microbenchmarks and enumeration-heavy queries from the TPC-H business intelligence benchmark. Our results show that self-managed collections outperform ordinary managed collections in both query processing and memory management by up to an order of magnitude and even outperform an optimized in-memory columnar database system for the vast majority of queries.
1. INTRODUCTION
This work follows two recent trends in data management and query processing: language-integrated query and ever-increasing memory capacities.
Language-integrated query is the smooth integration of programming and database languages. The impedance mismatch between these two classes of languages is well-known, but recent developments, notably Microsoft’s LINQ and, to a lesser extent, parallel streams and lambdas in Java, enrich the host programming language with relational-like query operators that can be composed to construct complex queries. Of particular interest to this work is that these queries can be targeted at both in-memory and external database data sources.
Over the last two decades, DRAM prices have been dropping at an annual rate of 33%. As of September 2016, servers with a DRAM capacity of more than 1TB are available for under US$50k. These servers allow the entire working set of many applications to fit into main memory, which greatly facilitates query processing as data no longer has to be continuously fetched from disk (e.g., via a disk-based external data management system); instead, it can be loaded into main memory and processed there, thus improving query processing performance.
Granted, the use case of a persistent (database) and a volatile (application) representation of data, coupled with a thin layer to translate between the two, is how programmers have been implementing applications for decades, and it will certainly not go away for all existing legacy applications that are in production. Combining, however, the trends of large memories and language-integrated query is forward-looking and promises a novel class of new applications that store huge volumes of data in the memory space of the application and use language-integrated query to process the data, without having to deal with the duality of data representations. This promises to facilitate application design and development because there is no longer a need to set up an external system or to deal with the interoperability between the object-oriented application and the relational database system.
Consider, for example, a business intelligence application that, on startup, loads a company’s most recent business data into collections of managed objects and then analyses the data using language-integrated query. Such applications process queries that usually scan most of the application data and condense it into a few summarizing values that are then returned to the user, typically presented as interactive GUI elements such as graphs, diagrams or tables. These queries are inherently very expensive as they perform complex aggregation, join and sort operations, and thus dominate most other application costs. Therefore, fast query processing for language-integrated query is imperative.
Unfortunately, previous work [12, 13] has already shown that the underlying query evaluation model used in many language-integrated query implementations, e.g., C#’s LINQ-to-objects, suffers from various significant inefficiencies that hamper performance. The most significant of these is the cost of calling virtual functions to propagate intermediate result objects between query operators and to evaluate predicate and selector functions in each operator. Query compilation has been shown to address these issues by dynamically generating highly optimized query code that is compiled and executed to evaluate the query. Previous work [13] also observed that the cost of performing garbage collections and the memory layout that garbage collection imposes on the collection data further restrict query performance. This issue needs to be addressed to make this new class of applications feasible for application developers.
Our solution to address these inefficiencies is to use self-managed collections (SMCs), a new collection type that manages the memory space of its objects in private memory that is excluded from garbage collection. SMCs exhibit different collection semantics than regular managed collections. This semantics is derived from the table type in databases and allows SMCs to automatically manage the memory layout of contained objects using the underlying type-safe manual memory management system. SMCs are optimized to provide fast query processing performance for enumeration-heavy queries. As the collection manages the memory layout of all contained objects and is aware of the order in which they are accessed by queries, it can place them accordingly to better exploit spatial locality. Doing so improves the performance of enumeration-heavy queries as CPU and compiler prefetching is better utilized. This is not possible when using automatic garbage collection as the garbage collector is not aware of collections and their content. Objects may be scattered all over the managed heap and the order they are accessed may not reflect the order in which they are stored in memory. SMCs are designed with query compilation in mind and allow the generated code low-level access to contained objects, thus enabling the generation of more efficient query code. On top of this, SMCs reduce the total garbage collection overhead by excluding all contained objects from garbage collection. With applications storing huge volumes of data in SMCs, this further improves application performance and scalability.
The remainder of this paper is organized as follows. In §2, we provide an overview of SMCs and their semantics before presenting a type-safe manual memory management system in §3. In §4, we introduce SMCs and show how they utilize our manual memory manager to improve query processing performance compared to regular collections that contain managed objects. Finally, we evaluate SMCs in §7 using microbenchmarks as well as some queries from the TPC-H benchmark. We conclude this work in §9.
2. OVERVIEW
SMCs are a specialized collection type designed to provide improved query processing performance compared to regular managed collections for application data accessed predominantly by language-integrated queries. This performance improvement may come at the expense of the performance of other access patterns (e.g., random access). SMCs are only meant to be used with data that is dominantly accessed in queries.
SMCs have a new semantics: they own their contained objects and hence the collection itself determines the lifetime of the objects. In other words, objects are created when they are inserted into the collection and their lifetime ends with their removal from the collection. This accurately models many use cases, as objects often are not relevant to the application once they are removed from their host collection. Consider, for example, a collection that stores products sold by a company. Removing a product from the collection usually means that the product is no longer relevant to any other part of the application. Managed applications, on the other hand, keep objects alive so long as they are still referenced. This means that a reference to an object that will never be touched again prevents the runtime from reclaiming the object’s memory. Object containment is inspired by database tables, where removing a record from a table entirely removes the record from the database.
The following code excerpt illustrates how the Add and Remove methods of SMCs are used:
```csharp
Collection<Person> persons = new Collection<Person>();
Person adam = persons.Add("Adam", 27);
/* ... */
persons.Remove(adam);
```
The collection’s Add method allocates memory for the object, calls the object’s constructor, adds the object to the collection and returns a reference to the object. As the lifetime of each object in the collection is defined by its containment in the collection, mapping the collection’s Add and Remove methods to the alloc and free methods of the underlying memory manager is straightforward. When the adam object is removed from the collection, it is gone; but it may still be referenced by other objects. Our semantics requires that all references to a self-managed object implicitly become null after removing the object from its host collection; dereferencing them will throw a NullReferenceException.1
SMCs are intended for high-performance query processing of objects stored in main memory. To achieve this, they leverage query compilation [12, 13] and support bag semantics, which allows generated queries to enumerate a collection’s objects in memory order. In order to exclude SMCs from garbage collection, we have to prevent collection objects from referencing managed objects. We enforce this by introducing the tabular class modifier to indicate classes backed by SMCs and statically ensuring that tabular classes only reference other tabular classes. Strings referenced by tabular classes are considered part of the object; their lifetime matches that of the object, thereby allowing the collection to reclaim the memory for the string when reclaiming the object’s memory. We further restrict SMCs from being defined on base classes or interfaces, to ensure that all objects in a collection have the same size and memory layout.
In contrast to regular managed collection types like List<T>, our collection types require a deeper integration with the managed runtime. As collections allocate and free memory for the objects they host, we introduce an off-heap memory system to the runtime that provides type, memory and thread safety. The alloc and free methods of the memory system are part of the runtime API and are called by the collection implementation as needed. The type safety guarantees for tabular types are not the same as for automatically managed ones. We guarantee that a reference always refers to an instance of the same type and that this instance is either the one that was assigned to the reference or, if the instance has been removed from the collection, null. This differs from automatically managed types, which guarantee that a reference points to the object it was assigned to for as long as the reference exists and refers to that object. To ensure type-safe reference accesses, we store additional information with each reference and perform extra checks when accessing an object. For managed types, references are translated into pointers to memory addresses by the just-in-time (JIT) compiler. As the logic for tabular types is more complex, we modify the JIT compiler to make it aware of tabular type references and the code that must be produced when dereferencing them.

1This suggests that an ownership type system could be useful to statically guarantee such exceptions are not raised; but we leave this to future work.
We use query compilation to transform LINQ queries on SMCs into query functions that process the query. To improve query performance, the generated code directly operates on the collection’s memory blocks (using unsafe, C-style pointers). All objects in the collections are stored in memory blocks that are private to the collections. Note that these blocks are not accessible outside the collection and the code generator. We assume that the structure of most LINQ queries is statically defined in the application’s source code, with only query parameters (e.g., a constant in a selection predicate) dynamically assigned. We modify the C# compiler to automatically expand all LINQ queries on SMCs into calls to automatically generated imperative functions that take the same parameters as arguments. Queries that are dynamically constructed at run-time can be dealt with using a LINQ query provider as in [13]. The generated imperative query code processes the query as in [13], but on top of SMCs that enable direct pointer access to the underlying data.
3. TYPE-SAFE MANUAL MEMORY MANAGEMENT
Our manual memory management system is purpose-built for SMCs. It leverages various techniques to allow SMCs to manually manage contained objects and to provide fast query processing.
3.1 Type stability and incarnations
The memory manager allocates objects from unmanaged memory blocks, where each block only serves objects of a certain type. By only storing objects of a certain type in each block and disallowing variable-sized objects to be stored in place, we ensure that all object headers in a block remain at constant positions within that block, even after freeing objects and reusing their memory for new ones. We align the base address of all blocks to the block size to allow extracting the address of the block’s header from the object pointer. This allows us to store type-specific information like vtable pointers only once per block rather than with every object. We refer to the memory space in a block that is occupied by an object as the object’s memory slot. Object headers contain a 32-bit incarnation number. We use incarnations to ensure that objects are not accessed after having been freed. For each slot, the incarnation number is initialized to zero and incremented whenever an object is freed. References to objects store the incarnation of the object together with its pointer. Before accessing the object’s data, the system verifies that the incarnation number of the reference matches that in the object’s header and only then allows access to the object [1]. If the application tries to access an object that has been freed (i.e., non-matching incarnation numbers), the system raises a null reference exception. The JIT compiler injects these checks when dereferencing a manually managed object. We do not expect incarnation numbers to overflow in the lifetime of a typical application, but should overflows occur, we stop reusing the affected memory slots until a background thread has scanned all manually managed objects and has set all invalid references to null. Single-type memory blocks combined with incarnation numbers ensure type-safe manual memory management as defined in §2.
3.2 Memory layout
We illustrate the memory layout of our approach in Figure 1. We do not store a pointer to an object’s memory slot in its reference, but instead use a level of indirection. We will require this for the compaction schemes of §5. The pointer stored in object references points to an entry in the global indirection table which, in turn, contains a pointer to the object’s memory slot. We store the incarnation number associated with an object in its indirection table entry rather than its memory slot. This allows us to reuse empty indirection table entries and memory blocks for different types without breaking our type guarantees.
As shown in Figure 1, each data block is divided into four consecutive memory segments: block header, object store, slot directory, and back-pointers. The object store contains all object data. Each object’s data is accessible through a pointer from the corresponding indirection table entry or through the identifier of the object’s slot in the block. The slot directory stores the state of each slot and further state-related information (for a total of 32 bits). Each slot can be in one of three states: free, i.e., the slot has never been used before; valid, i.e., it contains object data; or limbo, i.e., the object has been removed, but its slot has not been reclaimed yet. Back-pointers are required for query processing and for compaction; they store a pointer to the object’s indirection table entry. The slot directory entry and the back-pointer are accessible using the object’s slot identifier.
3.3 Memory contexts
We have so far grouped objects of the same type in blocks private to that type. In many use cases, certain object types exhibit spatial locality: objects of the same collection are more likely to be accessed in close proximity. Memory contexts allow the programmer to instruct the allocation function to allocate objects in the blocks of a certain context (e.g., a collection). The memory blocks of a context only contain objects of a single type and only the ones that have been allocated in that specific memory context.
3.4 Concurrency
Incarnation numbers protect references from accessing objects that have been freed. However, they do not protect objects from being freed and reused while being accessed. Consider Figure 2: Thread 2 frees and reuses the memory slot referenced by the adam reference just after Thread 1 successfully checked the incarnation numbers for the same object. As Thread 1's incarnation number check was successful, the thread accesses the object, which is now no longer Adam, but Tom. This behavior violates the type-safety requirement of always returning the object assigned to a reference, or null if the referenced object has been freed. We refine the requirement for the concurrent case by specifying the check of the incarnation numbers to be the point in time where the requirement must hold. Thus, all accesses to objects are valid as long as the incarnation numbers matched at the time they were checked. To enforce the type-safety requirement, the memory manager ensures that if an object is freed, its memory slot cannot be reused for a new object until all concurrent threads have finished accessing that object.
We use a variation of epoch-based reclamation [7] to ensure thread safety. In epoch-based reclamation, threads access shared objects in grace periods (critical sections). The memory space of shared objects can only be reclaimed once all threads that may have accessed the object in a grace period have completed this grace period. Thus, grace periods are the time intervals during which a thread can access objects without re-checking their incarnation numbers to ensure type safety. Epochs are time intervals during which all threads pass at least one grace period. The system maintains a global epoch; each thread maintains its thread-local epoch. In Figure 3, we show how we track epochs. Upon entering a critical section (grace period), each thread sets its thread-local epoch to the current global epoch. To leave a critical section, a thread can increment the global epoch if all other threads that currently are in critical sections have reached the current global epoch. Hence, threads can either be in the global epoch e or in e - 1. Memory freed in some global epoch e can safely be reclaimed in epoch e + 2 because by that time, no concurrent thread can still be in epoch e.

To implement epoch-based reclamation, the JIT compiler automatically injects code to start and end critical sections when dereferencing manually managed objects. Critical sections are not limited to a single reference access; several accesses can be combined into a single critical section to amortize the overhead of starting and ending critical sections. The following illustrates the code to start and end a critical section:
```csharp
void enter_critical_section() {
    global->sectionCtx[threadId].epoch = global->epoch;
    global->sectionCtx[threadId].inCritical = 1;
    memory_fence();
}

void exit_critical_section() {
    memory_fence();
    global->sectionCtx[threadId].inCritical = 0;
}
```
Upon entering a critical section, each thread sets its local epoch to the current global epoch and sets a flag to indicate that the thread is currently in a critical section; on exit the thread clears this flag. We have to enforce compiler and CPU instruction ordering around these instructions to ensure that the section context is set before we access the object and not unset until we have finished; hence, the memory fences. In contrast to [7], we do not increment global epochs modulo three, but as a continuous counter. We also do not increment the global epoch and reclaim memory when exiting critical sections, but in the memory manager's allocation function. This allows us to lazily reclaim memory on demand when allocating new objects.
3.5 Memory operations
When freeing an object, we increment its incarnation number to prevent subsequent accesses to it. We refer to memory slots that are freed, but not yet available for reuse, as limbo slots. We set the memory slot's state to limbo and set its removal timestamp to the current global epoch in the slot directory. This bookkeeping ensures that the slot cannot be reclaimed until at least two epochs have passed. Memory blocks become candidates for reclamation when they surpass a threshold fraction of limbo slots, the reclamation threshold. If this is the case, we add the block to a queue of same-type memory blocks that may be reclaimed, along with the earliest timestamp when the block can be reclaimed (global epoch plus two).

All allocations are performed from thread-local blocks so that only one thread allocates slots in a block at a time (though there can be concurrent removals from the same block). Thread-local blocks are taken from the reclamation queue of the appropriate type if there are blocks ready for reclamation; if the queue is empty, they are allocated from the unmanaged heap. To find a memory slot for a new object, the allocation function scans all entries in the slot directory from the slot of the last allocation until either a free slot or a reclaimable limbo slot is found. The maximum number of slots scanned before finding a limbo slot that can be reclaimed depends on the reclamation threshold. For instance, if blocks can host one hundred objects and are added to the queue once they contain more than 5% limbo slots, then each allocation scans at most twenty slots to find a reclaimable limbo slot. The actual number is likely to be smaller, as removals might have happened in the meantime. The allocation function attempts to increment the global epoch counter once there are blocks in the reclamation queue that cannot be reclaimed yet because two epochs have not passed.
Figure 3: Epoch-based memory reclamation

4. SELF-MANAGED COLLECTIONS

SMCs use the type-safe memory management described in §3 and support the semantics of §2. The objects contained in an SMC are managed by the collection itself and not by the garbage collector. This, along with bag semantics, enables SMCs to place objects in memory based on the order the objects are touched when enumerating the collection's content in a query. This improves the locality of memory accesses when enumerating the SMC, leading to improved performance compared to iterating over the collection's content through references that may point anywhere in the managed heap (as is the case for all conventional .NET collections). A convenient side-effect of disallowing SMCs to contain standard objects is that it significantly reduces the size of the managed heap and the volume of memory that has to be scanned during garbage collection and, in consequence, the duration of garbage collection, which improves the overall performance of the application.
SMCs use the type-safe memory manager of §3 to manage contained objects. The semantics of SMCs mean that the Add and Remove methods can directly be mapped to the memory manager's alloc and free methods. In addition to allocating memory for the object, the Add method calls the object's constructor and returns a reference to the object.
Each SMC has a private memory context to allocate all objects added to the collection. This ensures that all objects in an SMC end up in the same set of private memory blocks. The SMC can access all of these blocks through the memory context. Recall from §2 that we automatically transform LINQ queries over SMCs into calls to specialized query functions that use query compilation to improve the performance of query processing. By giving the SMC access to these memory blocks, we also allow the query compiler to access them to enumerate over the SMC's objects. The following illustrates a simple compiled query that enumerates over all objects in the SMC by iterating over all valid slots in all blocks in the SMC's memory context, checking a predicate on the age field, and returning references to all qualifying objects:
```csharp
enter_critical_section();
foreach (Block* blk in collection.GetMemoryContext())
    foreach (Slot i in blk)
        if (blk->slots[i] == VALID)
            if (blk->data[i].age > 17)
                yield new ObjRef { ptr = blk->backptr[i],
                                   inc = blk->backptr[i]->inc };
exit_critical_section();
```
The query uses the memory block’s slot directory blk->slots to check if the corresponding memory slot contains a valid object (in contrast to a free or limbo slot). As each entry in the slot directory is only four bytes wide and stored in a consecutive memory area, it is fairly cheap to iterate over the slot directory to check for valid slots. The query touches the object’s data only if the slot is valid. If the slot also satisfies the selection predicate, the query returns a reference (ObjRef) to the object. To do so, it uses the back-pointer field blk->backptr to obtain a pointer to the corresponding indirection table entry. The reference contains this pointer and the current incarnation number of the object to ensure that the memory slot can safely be reclaimed once the object is removed from the SMC. To generate code for more complex queries we follow a similar strategy as in previous work [10, 12, 13, 14].
To ensure that the accessed objects are not removed and their memory slots are not reclaimed while directly accessing objects in a query, we have to be in a critical section. This applies to objects in the primary SMC that we enumerate as well as to objects in other SMCs that we access through references from the primary SMC. Instead of entering and exiting a critical section around each object access, we process huge chunks of data in the same critical section. This amortizes the cost of critical sections (in particular, memory fences) and, hence, is a cornerstone of providing good query performance. The query remains in the same critical section either for its entire duration, or for the duration of processing a single memory block. The query compiler chooses the desired granularity for each query based on the requirements of the query. Staying in the same critical section for the duration of the query allows the compiler to generate code that stores direct pointers to the memory locations of SMC objects in intermediate results and data structures (otherwise the query may only use object references). However, it also increases the time until the memory manager can increment the global epoch to reclaim limbo slots. As LINQ queries are lazily evaluated, we enforce that critical sections are exited before a result object is returned and, hence, control is returned to the application. Since queries often contain several blocking operations (e.g., aggregation or sorting), most query processing is performed in a single critical section. Objects that are concurrently removed from an SMC while a query enumerates the SMC's content are included in the query's result if: (a) the query reads the object's slot directory entry before the slot is set to limbo, or (b) the query follows a reference to the object before its incarnation number is incremented. Objects concurrently added to an SMC behave accordingly. SMCs use a lower isolation level than database systems, in line with other managed collections.
4.1 Columnar storage
While SMCs manage the memory space of contained objects themselves, they keep the memory layout of the objects' data unchanged. Previous work in database systems, e.g., [2], has shown that some workloads, however, greatly benefit from a columnar layout instead of the row-wise layout of SMCs. Since SMCs store all object data in blocks that only contain objects from the same collection and, hence, of the same type, they can easily be extended to leverage a columnar layout. The only requirements are that: (a) the JIT compiler injects the code required to access columnar data when following references to such objects, and (b) the query compiler is aware of the data layout and also generates code that accesses the data in a columnar fashion.
For columnar layouts, we store the object’s block and slot identifiers in the object’s indirection table entry instead of a pointer to the object’s memory location. To access the data of an object, we look up its memory block using an array of memory blocks indexed by their block identifier, and then use the slot identifier to find the position of the value in its column.
5. COMPACTION
Common uses of SMCs do not cause them to shrink significantly; they stay at a stable size or grow steadily. However, when facing heavy shrinkage of an SMC, we perform compaction to reduce the SMC's memory footprint and improve query performance. When relocating objects as part of a compaction, we have to ensure that concurrent accesses to them do not exhibit inconsistencies. Inconsistencies may arise from accesses through references or from queries directly operating on the SMC's memory blocks.
5.1 Reference access
The indirection table allows us to move data objects within and across memory blocks without having to update all references to them; only the pointer in the object's indirection table entry has to change. To prepare a relocation, the compaction thread enters a critical section, marks the objects to be moved by setting the freeze bit in their incarnation numbers, and records the planned moves in the blocks' relocation lists. The global epoch cannot be increased in the meantime because the thread is still in a critical section using the thread-local epoch. The compaction thread increments the global epoch at the end of the freezing epoch e + 1 to start the relocation epoch. The relocation epoch consists of two phases: the waiting phase, which lasts until the compaction thread observes that all other threads are in the relocation epoch, and the moving phase that starts thereafter. While waiting, the compaction thread continuously tries to increment the global epoch to proceed to the moving phase. Once in the moving phase, the compaction thread makes this phase globally visible by setting a global variable to indicate that frozen objects may now be moved. It then iterates over all blocks scheduled for compaction. For every slot to be moved, it atomically locks the incarnation number by setting the lock bit, copies the object to the new location, updates the pointer in the indirection table, unsets the lock and freeze bits, and marks the relocation as successful in the block's relocation list. Once all scheduled relocations are done, the compaction thread increments the global epoch to e + 3 (all threads are guaranteed to be at e + 2 by this point), exits its critical section to allow other threads to increment the global epoch, and goes back to sleep. Figure 4 illustrates the steps to move an object inside a memory block.

If an object's incarnation number is not frozen, there is no risk of it being moved in the current epoch, so all threads can access it as before. Note that the incarnation number comparison that we have to perform anyway is enough to cover this most common path. If we encounter a frozen incarnation number (i.e., the first incarnation number comparison fails, but a second one that excludes the freeze and lock bits succeeds), there are three cases: (a) We are in the freezing epoch. There will not be any relocation in this epoch, so we can return the data pointer. (b) We are in the waiting phase of the relocation epoch and not all threads are in the relocation epoch yet. A relocation might happen while we access the object, so we cannot proceed. However, we also cannot relocate the object ourselves because not all threads are in the relocation epoch yet and so do not expect relocations. Our only option is to bail out of relocating the object. To do so, we find the object's entry in the block's relocation list, atomically set the lock bit in the object's incarnation number, set the status of the relocation to failed (in the object's relocation list entry), and unset the freeze and lock bits. If the lock bit has already been set by another thread, we spin until it is unset and then recheck the object's status. Once the freeze bit is removed, we can return the pointer and proceed. (c) We are in the moving phase of the relocation epoch and all other threads are also in the relocation epoch. We again cannot proceed because the object may be moved at any time, but we can help the compaction thread move the object to its new location and then proceed. To do so, we find the object's entry in the block's relocation list, atomically set the lock bit in the object's incarnation number, move the object to its new location, set the status of the relocation to succeeded, and unset the freeze and lock bits.

As in the previous case, we spin if the lock bit is set, then recheck the status and finally return the pointer after the freeze bit is unset. The following outlines the checks that have to be performed before accessing a manually managed object through its reference:
```csharp
void* dereference_object(ObjRef oref) {
    if (oref.inc == oref.ptr->inc) {
        return oref.ptr->memptr;
    } else if (oref.inc == (oref.ptr->inc & FL_MASK)) {
        if (global->sectionCtx[threadId].epoch != global->nextRelocationEpoch) {
            // First case: freezing epoch, no relocations yet.
            return oref.ptr->memptr;
        } else if (!global->inMovingPhase) {
            // Second case: waiting phase, fail the relocation.
            bail_out_relocation(oref);
            return oref.ptr->memptr;
        } else {
            // Third case: moving phase, help relocate the object.
            relocate_object(oref);
            return oref.ptr->memptr;
        }
    } else {
        throw new NullReferenceException();
    }
}
```
Note that outside freezing and relocation epochs, the first condition is always satisfied if the referenced object has not been freed. If the object access is known to be read-only, we can always use the original location of the object in the waiting phase of the relocation epoch, as its memory location cannot be reclaimed while we access it. In this case, the reader does not have to fail the relocation of that object.
When the compaction thread starts iterating over the blocks to be compacted (i.e., the moving phase of the relocation epoch), all failed relocations are visible so the thread can deal with them. If necessary, it extends compaction by one additional epoch to try all unsuccessful relocations again by adding another freezing phase at the end of the relocation epoch and setting the following epoch to be a relocation epoch before exiting the current relocation epoch.
5.2 Block access
Queries directly operating on the memory blocks of an SMC can also cause inconsistencies where the query misses some objects because they are concurrently being relocated or includes them twice. To prevent these inconsistencies, we have to extend the compaction scheme described thus far. We always empty the memory blocks that take part in the compaction by moving their objects to new memory blocks and removing the emptied blocks from the collection. Blocks only participate in a compaction if their occupancy is below a threshold (e.g., 30%). Blocks that participate in a compaction are assigned to compaction groups where the objects of all blocks in a compaction group are moved to the same new block. The number of blocks in a compaction group depends on the aforementioned threshold; a 30% threshold results in three blocks per group.
Queries process all blocks of a compaction group in the same thread-local epoch and in consecutive order. This ensures consistent query behavior outside relocation epochs, as relocations cannot start while the compaction group is being processed. During relocation epochs, we have to ensure that queries access either only the pre-relocation state of a compaction group or only its post-relocation state. If processing of a compaction group starts in the moving phase of the relocation epoch, the query first helps perform the relocation of the compaction group and then uses the compacted memory block for query processing. If processing of the group starts in the waiting phase, we cannot compact the group's content yet. In this case, we add the group to a list of groups that still have to be processed and continue with the remaining memory blocks. Once all remaining blocks are exhausted, we check if the moving phase has already started and, if this is the case, process all remaining compaction groups by first performing the relocation and then processing the compacted block. If the moving phase has not started yet, we process the compaction group in its pre-relocation state by atomically incrementing a query counter in the compaction group that prevents other threads from compacting the group until the query decrements the counter again. Relocations only occur in the moving phase of the relocation epoch and, hence, once a relocation waits for the query counter of a compaction group to become zero, there are no more queries incrementing it. The compaction thread bails out of compacting a certain group after waiting for a predefined amount of time for the read lock to be released. We do this to deal with queries that return control to the application (i.e., return a result element) while holding the read lock.
6. DIRECT POINTERS
When a query touches an object that contains many references to nested objects, SMCs may lose ground to automatically managed collections: each dereference not only has to check incarnation numbers but, more importantly, has to pay for an additional (random) memory access to the indirection table. We now provide an alternative implementation that solves this problem. We keep the indirection for all external references but, for references between SMCs, we store a direct pointer to the corresponding memory location. To be able to check incarnation numbers in both cases, the incarnation number of a memory slot is moved back into the memory slot (object header) instead of the indirection table. In Figure 5 we show the new layout, which improves performance for queries that use references to access objects from several SMCs.
When relocating an object, however, the new memory location of the object now has to be updated in the indirection table as well as in all self-managed objects that reference it, which is no longer an atomic operation. We address this by adding a third flag to the incarnation number, the forwarding flag. The forwarding flag turns the object’s old memory slot into a tombstone. Queries reaching the tombstone through direct pointers use the slot’s back-pointer to access the object’s indirection table entry which contains a
pointer to its new memory slot. To improve the performance of future accesses to this object, the query also updates the direct pointer to the object’s new memory location. The forwarding flag is set by the thread relocating the object after completing the relocation in the same atomic operation that unsets the frozen and lock bits; hence, tombstones cannot be reached through (indirect) references. As was the case for the two other flags, checking the forwarding flag is performed during incarnation number checking and, hence, does not penalize the common case of an unset forwarding flag.
Tombstoned memory slots are not reclaimed until there are no more direct pointers to them. After compacting an SMC, the compaction thread scans all SMCs that have direct pointers to it and updates the pointers to relocated objects. Note that the references between SMCs are statically known and the compiler can produce specialized functions that only scan SMCs that have direct pointers that may have to be updated and only check the corresponding pointer fields. We improve the performance of scanning an SMC to update direct pointers by only following pointers to memory slots that are known to have been relocated. This saves many random memory accesses. We achieve this by building a hash table during compaction that contains the memory addresses of all blocks that are compacted and, instead of following a direct pointer to see if the forwarding flag is set, we first compute the address of the corresponding block, probe it in the hash table and only follow the direct pointer if the block address was in the hash table.
7. EVALUATION
We implemented SMCs as a library using unsafe C# code. We did not change the JIT compiler to automatically inject the code for correctly dereferencing references to self-managed objects but added this code by hand to factor out any overhead. We implemented the code generation techniques of [13] and we did not use any query-specific optimizations. Our experimental setup was an Intel Core i7-2700K (4x3.5GHz) system with 16GB of RAM, running Windows 8.1 and .NET 4.5.2. We compare SMCs with the default managed collection types in C#. Unlike SMCs, most collections in C# are not thread-safe (e.g., List<T>, C#'s version of a dynamic array). Thread-safe collection types in C# are limited and only ConcurrentDictionary<TKey, TValue> and ConcurrentBag<T> provide comparable functionality to SMCs; however, ConcurrentBag<T> does not allow the removal of specific objects. .NET supports two garbage collection modes: workstation and server. Both modes support either interactive (concurrent) or batch (non-concurrent) garbage collection. In our tests the server modes outperformed the workstation ones, so we only report results for the server mode and only report both concurrency settings if their results differ.
Our benchmarks are primarily based on an object-oriented adaptation of the TPC-H workload. We have chosen to focus on a database benchmark as we believe it exemplifies the class of large-scale analytics applications that will benefit from SMCs. A relational workload is the most typical example of an application that has traditionally offloaded ‘heavy’ data-bound computation to an optimized runtime for that data model (a relational DBMS). As such, it is a good indication of the classes of queries that can be integrated into the programming language and, at the same time, it provides an immediate performance comparison to the dominant alternative. TPC-H tables map to collections and each record to an object composed of C#'s primitive types and references to other records (all primary-foreign-key relations). Based on the latter, most joins are performed using references. Unless stated otherwise, we use a scale factor of three for all TPC-H benchmarks. Note that due to a 16-byte-per-object overhead and larger primitive types (e.g., decimal is 16 bytes wide) in C#, a scale factor of three requires significantly more memory than in a database system.
**Sensitivity to relocation threshold** Recall from §3.4 that the data blocks of SMCs may contain limbo slots that cannot be reclaimed yet and that we use a tolerance threshold of such slots in a block that needs to be surpassed before adding the block to a reclamation queue. Varying this threshold affects the memory size, the cost of memory operations and the query performance of SMCs. In Figure 6 we show how these factors change when varying the threshold (normalized to the maximum value). As the percentage of unused limbo slots grows, so does the memory footprint of the collection. The cost of performing memory operations (i.e., insertions and removals) slowly decreases with an increasing threshold as allocations have to scan fewer memory slots to find a slot that can be claimed. Query performance seems to be less dependent on the additional slot directory entries that have to be processed with an increasing threshold, but more on the branch misprediction penalties when verifying if the slot is occupied. At a 50% threshold, the branch predictor has the most trouble predicting if the slot is occupied. Based on the results of Figure 6, we will use a 5% threshold for the following experiments. For a 5% threshold, the memory requirements of SMCs are comparable to that of storing managed objects in List<T>.
**Memory allocation throughput** In Figure 7 we compare the throughput (in objects per second) of allocating LineItem objects (using the default constructor) in an SMC to the pure allocation throughput of managed objects in .NET and the throughput of allocating managed objects and adding them to a concurrent collection. For managed allocations we report the throughput for interactive and batch garbage collection; the latter consistently provides better performance. SMCs significantly outperform both managed collections and the pure allocation cost of managed objects. All objects remain reachable so the runtime performs numerous garbage collections, with many of them stopping all application threads to copy objects from younger to older generations. SMCs allocate from (previously unused) thread-local blocks, which reduces the synchronization overhead of multiple allocation threads to about one atomic operation per 10k lineitem allocations.
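A minimal sketch of why thread-local allocation blocks keep synchronization down to roughly one atomic operation per block rather than per object (the class and its parameters are illustrative assumptions, not the paper's allocator):

```python
import itertools
import threading

class BlockAllocator:
    """Toy thread-local block allocator: shared state is touched only
    when a thread claims a fresh block, not on every allocation."""
    SLOTS_PER_BLOCK = 10_000            # e.g. ~10k lineitem slots per block

    def __init__(self):
        self._next_block = itertools.count()   # shared block counter
        self._lock = threading.Lock()          # models the atomic operation
        self._local = threading.local()        # per-thread bump state

    def _claim_block(self):
        with self._lock:                # the only synchronized step
            block = next(self._next_block)
        self._local.block = block
        self._local.used = 0

    def allocate(self):
        if getattr(self._local, "used", self.SLOTS_PER_BLOCK) >= self.SLOTS_PER_BLOCK:
            self._claim_block()         # once per SLOTS_PER_BLOCK allocations
        slot = (self._local.block, self._local.used)
        self._local.used += 1           # thread-local, no synchronization
        return slot
```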
**Refresh streams** To measure the throughput of memory operations we introduce the TPC-H refresh streams. Each thread continuously runs one of two kinds of streams with the same frequency. The first stream type creates and adds lineitem objects (0.1% of the initial population) to the directory collection. The second stream type enumerates all elements in the directory collection and removes 0.1% of the initial population based on a predicate on the object’s orderkey value. All 0.1% objects to delete are provided in a hash map and removed in a single enumeration over the collection. This benchmark represents the common use case of refreshing the data stored in SMCs. In Figure 8 we report the stream throughput for SMCs against ConcurrentDictionary<TKey, TValue>; ConcurrentBag<T> is not included because it does not support the removal of specific elements. SMCs perform better than both types of managed collections in all cases.
**Impact of garbage collection** Out of the two garbage collection settings reported in Figure 7, the (non-concurrent) batch mode provides the higher throughput. In other garbage collection intensive benchmarks, we found the batch mode to enable a several times higher throughput. However, the higher throughput comes at a price: response time. Where concurrent collectors (interactive) can perform big parts of garbage collection on a background thread without pausing all application threads, non-concurrent collectors have to pause all threads for the duration of the collection. As the size of the managed heap grows, so does the duration of full garbage collections and, hence, the application’s maximum response time. To illustrate this, we insert a number of objects into a collection, either managed or self-managed, and then start two threads in parallel. The first thread continuously allocates managed objects with varying lifetimes and the second continuously sleeps for one millisecond and measures the time that passed in the meantime. If it observes that significantly more time has passed than expected, it records the value as it most likely was caused by garbage collection triggered by the other thread. Figure 9 shows the maximum timeout measured for a varying number of objects stored in the collection. For non-concurrent garbage collection, the maximum timeout increases with a growing number of objects stored in a managed collection, but remains fairly stable when these objects are stored in an SMC. Thus, the duration of garbage collections increases with growing data volumes stored in the managed heap. In the batch mode this negatively impacts the responsiveness of the application; in the interactive mode, it negatively impacts the overall application performance as the background collection thread steals processing resources from the application. In both cases, SMCs scale better with increasing data volumes.
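The measurement loop of the second thread can be reconstructed roughly as follows (our own sketch of the experiment, not the paper's harness): sleep for one millisecond, measure how much longer the sleep actually took, and keep the largest overshoot as an approximation of the longest stop-the-world pause.

```python
import time

def max_timeout(duration_s=1.0, sleep_s=0.001):
    """Repeatedly sleep for sleep_s and record the worst overshoot,
    which approximates the longest pause inflicted by another thread
    (e.g. a stop-the-world garbage collection)."""
    worst = 0.0
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        before = time.monotonic()
        time.sleep(sleep_s)
        elapsed = time.monotonic() - before
        worst = max(worst, elapsed - sleep_s)  # time stolen beyond the sleep
    return worst
```

In the real experiment the other thread continuously allocates managed objects; here the function only demonstrates the measurement side.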
**Enumeration performance** We first report on the pure enumeration performance of SMCs before considering more complex queries. Our queries either: (a) enumerate the lineitem collection and perform a simple function on each object to ensure that all lineitem objects are accessed; or (b) enumerate the lineitem collection, and for each object follow the order reference to a customer object and perform a simple function on the latter to ensure that customer objects are also accessed. Query performance deteriorates over time as objects are added and removed from the collection. In managed collections, objects may end up scattered all over the managed heap, whereas in SMCs the blocks containing objects may have holes due to limbo slots. In Figure 10 we show the performance of both query types after the collections are freshly loaded (fresh) and after the collections have undergone numerous object removals and insertions (worn). SMCs (indirect) outperform all automatically managed collections. However, when performing nested object accesses, the difference with List<T> diminishes because of the additional memory access required by indirection when following self-managed references. By utilizing the direct pointers of §6, we can bypass this look-up and improve performance. When comparing the fresh and worn states, SMCs only lose performance under nested accesses, whereas managed collections exhibit degraded performance in both cases. As ConcurrentDictionary<TKey, TValue> is the best performing thread-safe managed collection, we exclude ConcurrentBag<T> in what follows.
**Query processing** In Figure 11 we show the performance of the object-oriented adaptation of the first six TPC-H queries. For managed collections, we report the query performance of compiled C# code (as in [13] but with reference-based joins). Using LINQ to evaluate the queries instead of compiling them to C# code results in a 40% to 400% higher evaluation time, but as this was not the focus of the paper, we do not report it in Figure 11. We report on two versions of compiled code for SMCs: (a) Compiled C# code that, other than the enumeration code, is equivalent to the code used for managed collections. This illustrates the fraction of the overall improvement contributed by the better enumeration performance of SMCs. (b) Compiled unsafe C# code that contains optimizations only possible on SMCs. One such optimization is to use direct pointers to primitive types in an object (e.g., decimal values) as arguments to functions that operate on them (e.g., addition). For managed objects, these functions have to be called by value as the garbage collector may move the object inside the managed heap at any time without notice and, hence, the pointer would become invalid. Another optimization is to use memory regions [16] for all intermediate data during query processing, which improves performance by excluding those intermediates from garbage collection. Figure 11 reports the query processing performance relative to the performance of List<T>. SMCs perform significantly better than ConcurrentDictionary<TKey, TValue>, the fastest competing thread-safe collection in .NET, and even between 47% and 80% better than List<T>. Query 1 is a great example of what can be achieved with direct pointer access to self-managed objects. The query is heavy on decimal computation and as C#’s decimal type is 16 bytes wide, calling the functions that perform decimal math using pointers and allowing for in-place modifications results in a huge performance gain.
The other queries are less decimal computation intensive and, hence, show very little improvement from using unsafe code. Generating native C code leads to another 10% to 20% improvement over compiled unsafe C# code. But as the compiled code is (mostly) equivalent to the compiled unsafe C# code, any performance differences can be attributed to more aggressive code-level optimizations by the C compiler.
**Direct pointers and columnar storage** In Figure 12 we show the impact of the direct pointer optimization introduced in §6 and columnar storage as discussed in §4.1. Direct pointers moderately improve query performance for queries that contain joins, in particular for Query 5. Columnar storage shows further improvements, enabled by SMCs decoupling the memory layout of their elements from their definition through managing their own memory.
**Comparison to RDBMS** To put the SMC results into perspective, we compare the query performance over objects in SMCs to that of a modern commercial database system. We use SQL Server 2014 for this purpose as it is well integrated into .NET and incorporates a compressed in-memory columnar store. We store all tables in the database’s column store and, in addition, use clustered indexes on shipdate and orderdate. We use the read uncommitted isolation level and disable parallelized query execution to level the playing field. The results are shown in Figure 13. For most of the queries, SMCs exhibit better query performance. For join-heavy queries, they benefit from using references to perform joins instead of explicit value-based join operations. In other queries the database benefits from the indexes on shipdate and orderdate.
8. RELATED WORK
Type-safe manual memory management is at the core of SMCs. Region-based memory management [16] groups objects at region granularity; for the applications we target, where objects are long-lived with only incremental insertions and deletions, reclaiming memory at region granularity incurs too high a storage overhead. Memory safety at object granularity is enforced by introducing specialized pointer types, e.g., smart pointers in C++11, which use reference counting to ensure that memory is only freed once it is no longer referenced. Reference counting comes at a high cost, especially when objects may be accessed concurrently [11]. Fat pointers are frequently used for type and/or memory safety at run-time [1]. Tracking object incarnations [6] is an application of this approach.
We use a variant of epoch-based memory reclamation used in lock-free data structures [5, 7], to ensure thread-safety. Hazard pointers and their variants [8, 11] ensure that threads only reclaim memory that is not referenced by other such pointers. This is similar to our epoch-based approach, but it would reduce performance: each query would iterate over objects through a hazard pointer, requiring a memory barrier whenever it is assigned to the next object. Epochs amortize the cost of memory barriers by using the entire query as the granularity of the critical section. Braginsky and Petrank [3] propose a lock-free sorted linked list optimized for spatial locality. Each list element is a sub-list of several data elements stored as a chunk of memory. Hazard pointers ensure safe memory reclamation, while a freeze bit in the elements’ next pointer ensures lock-free splitting and merging of chunks. The implementation is limited to a specific format for each list element (integer key and value).
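A toy version of epoch-based reclamation illustrates the idea (this is a generic sketch of the technique from the lock-free data structure literature, not the paper's exact variant; all names are ours): each query pins the epoch current at its start, and a slot retired in epoch e is only reclaimed once no active reader pinned an epoch at or before e.

```python
class EpochManager:
    """Toy epoch-based memory reclamation."""
    def __init__(self):
        self.global_epoch = 0
        self.active = {}          # reader id -> pinned epoch
        self.limbo = []           # (retired_epoch, slot) awaiting reclamation

    def enter(self, reader):
        # A query pins the current epoch for its entire duration,
        # amortizing the cost of the memory barrier over the whole query.
        self.active[reader] = self.global_epoch

    def exit(self, reader):
        del self.active[reader]

    def retire(self, slot):
        # Removed slots go to limbo; advancing the epoch lets later
        # reclamation distinguish old readers from new ones.
        self.limbo.append((self.global_epoch, slot))
        self.global_epoch += 1

    def reclaim(self):
        # A slot is safe once no active reader pinned an epoch <= its
        # retirement epoch.
        oldest = min(self.active.values(), default=self.global_epoch)
        safe = [s for e, s in self.limbo if e < oldest]
        self.limbo = [(e, s) for e, s in self.limbo if e >= oldest]
        return safe
```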
To improve query performance, SMCs rely on query compilation [9, 10, 14, 15]. We use popular techniques, e.g., maximizing the processing performed in each loop and merging query operations inside a loop to maximize data reuse [14]. Klonatos et al. [9] propose the use of a high-level programming language for implementation and use query compilation for query processing. In contrast to our approach, the data store and query processor are not integrated with the application and, hence, the database functionality is treated as a black box (e.g., there are no references to data objects). Murray et al. [12] first proposed query compilation for LINQ queries on in-memory objects. Their code generation approach did not go beyond querying C# objects in managed collections using compiled C# code. Nagel et al. [13] extended that idea by experimenting with different data layouts and identified managed collections as a performance bottleneck; generating native C code that operates on arrays of in-place structs provided the best performance. Our work builds on these findings.
DryadLINQ [17] and Trill [4] both build on LINQ to ease programming and to provide better application integration. DryadLINQ transforms LINQ programs into distributed computations running on a cluster whereas Trill operates on data batches pushed from external sources.
9. CONCLUSION
In this paper we introduced self-managed collections, a new type of collection for managed applications that manage and process large volumes of in-memory data. SMCs have specialized semantics that allow the collection to manually manage the memory space of its contained objects, and the objects of the collection to be referenced from the application and other SMCs. SMCs are optimized for query processing using language-integrated queries compiled to imperative code. We introduced the type-safe manual memory management system of SMCs and then the collection type itself. Our evaluation shows that SMCs outperform managed collections on query performance, batch allocations, and online modifications using predicate-based removal. At the same time, SMCs can improve the response time of the application overall by reducing the stress on the garbage collector and allow it to better scale with growing data volumes. Such scalability is transparent to the developer and eliminates the currently required practice of resorting to low-level programming techniques.
10. REFERENCES
Link to record in KAR: http://kar.kent.ac.uk/13530/
Animating CSP$_M$ Using Action Semantics
by
Leonardo Freitas, Ana Cavalcanti, Hermano Moura
{ljsf,alcc,hermano}@cin.ufpe.br
Informatics Center Federal University of Pernambuco,
Av. Luis Freire s/n CDU, 50.740-540, Recife, Brazil, (81)3271-8430 R.4018
August 30, 2001
Abstract
CSP$_M$ is a language used to model concurrent and parallel computer systems formally. This paper presents an implementation of a significant part of the operational semantics of CSP$_M$ using action semantics. This work is a starting point for the development of a formal animator using action semantics engines, compilers, or interpreters like ABACO or ANI, and of a Java library that implements the CSP operators.
Keywords: Action semantics, CSP$_M$, concurrency, animation.
1 Introduction
Communicating Sequential Processes (CSP) has been used to design concurrent systems based on a formal mathematical theory [17, 9]. Its main objective is to define the dynamic behaviour of concurrent processes. This is modeled using the concept of communication between processes. CSP$_M$ [18] is the machine-readable version of CSP. It is implemented by tools like FDR [3] and PROBE [4], and extends CSP with a subset of a functional language that is used to construct auxiliary expressions and functions.
An Action Semantics description for the implementation of CSP$_M$ operators can be used as a starting point for the formal development of animators or library implementations of CSP$_M$. There exist some libraries [16, 7] and an animator [4] for CSP$_M$. The libraries, however, were not created using formal descriptions. Freitas [6] is building a Java library that implements CSP primitives. The description of CSP$_M$ presented here will be a guide for that work.
As mentioned in [11], the communicative facet of Action Notation makes available a basic notation to specify concurrency. Some important features like synchronization mechanisms and resource sharing must be implemented on the top of this basic notation. This work also provides some important primitive communicative actions, as an extension library, that can be used to model other languages.
Another contribution of this work is a case study on the use of the communicative facet. As far as we know, only the examples reported in [2, 12, 1] are available. The authors of [2] and [12] give insightful views of the use of Action Semantics to model concurrency, but they do not treat it generically, nor do they support processing of distributed agents; they rely on centralized agents.
In Section 2 we briefly describe Action Semantics, presenting examples of its notation. Next, in Section 3, we define CSP types, channels, processes, and operators. After that, in Section 4, we present a fragment of the full Action Semantics for CSP [5], discussing only the main contributions: the abstract syntax of CSP$_M$, some functional constructs, the representation of processes, the animator protocol actions, and some primitive concurrent actions. Finally, in Section 5 we present the conclusions and an overview of future and related work.
2 Action Semantics
Action Semantics has many desirable properties that formalisms for specifying languages should have [21]. Action Notation [19, 11], the formal notation used in Action Semantics, provides control flow as well as data flow between actions. This allows us to specify programming language concepts like expressions, commands, declarations, concurrent agents, etc. There are five basic entities that comprise Action Notation: (i) Actions: entities that can be executed, processing information, like a piece of a program; (ii) Yielders: expressions that can be evaluated during action execution; (iii) Sorts: define data types with some operational functions (Data Notation); (iv) Agents: entities that encapsulate the execution of actions, like a thread; and (v) Abstractions: a sort of data that encapsulates an action.
An action has facets that deal with particular modes of data flow. There are several facets: basic, functional, declarative, imperative, communicative, reflexive, directive, and hybrid; they are described in [11, 19]. Below we briefly detail those used in this paper.
- Basic facet: processes information independently (just control flow).
- Functional facet: data are called transients and are not available to the subsequent action.
- Declarative facet: an action may produce bindings that are visible to the (scoped) sub-actions of the action.
- Imperative facet: an action may place data in storage cells, which may be inspected by subsequent actions. Storage is stable (visible to all actions).
- Communicative facet: an action may place a message (any kind of data including action abstractions) in the buffer of an action agent. The buffer is permanent, so it is concerned with the processing of communication between agents. All actions executed by an agent can access the buffer, but it is inaccessible to other agents.
There are, in Action Semantics, primitive actions and action combinators specifically built to deal with concurrency issues. The framework provides a bus for asynchronous message passing, asynchronous remote action invocation between action agents, and also a permanent buffer that can be inspected by any action of the performing agent. An agent acts like a lightweight process (or thread) of execution that behaves as a separate and independent running machine (CPU).
An action can be single-faceted (primitive action) or multi-faceted (composite action). The former only affects one kind of data (either transients, bindings, storage, or buffer);
the latter can compose each facet effect to produce complex and compounded actions. This composition is made by action combinators. There exist action combinators for each kind of the facet. Below we show some examples of actions.
- bind ‘x’ to 10: produces the binding of token “x” to the value 10.
- store true in cell1: stores the value true in the memory location cell1.
- send a message to agent1 containing the value bound to ‘x’: sends a message to agent1 from the performing-agent containing the value bound to the token “x” (performing-agent is a predefined variable that represents “this” agent).
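The three example actions can be read, very roughly, as transformers over a (bindings, storage, buffers) state. The following Python sketch is our own simplification and captures none of Action Notation's real semantics beyond these three examples:

```python
# Toy reading of single-faceted actions as state transformers.
# State = (bindings, storage, buffers); all names here are illustrative.

def bind(token, value):
    def action(bindings, storage, buffers):
        return ({**bindings, token: value}, storage, buffers)   # declarative
    return action

def store(value, cell):
    def action(bindings, storage, buffers):
        return (bindings, {**storage, cell: value}, buffers)    # imperative
    return action

def send(agent, token):
    def action(bindings, storage, buffers):
        msg = bindings[token]                                   # communicative
        new = {**buffers, agent: buffers.get(agent, []) + [msg]}
        return (bindings, storage, new)
    return action

def perform(actions, state=({}, {}, {})):
    """Sequentially perform a list of actions on an initial state."""
    for a in actions:
        state = a(*state)
    return state
```

Performing the three example actions in sequence yields a binding for "x", a stored value in cell1, and a message of 10 in agent1's buffer.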
An Action Semantics description for a programming language is a unified algebraic specification divided in the following modules: (i) Abstract Syntax; (ii) Semantic Functions: describe the mapping from the abstract syntax tree (AST) of programs to their meaning, using Action Notation; and (iii) Semantic Entities: define the data types used by the language, and auxiliary sorts and combinators used by the description in the previous module. For a detailed description of Action Semantics see [11, 19].
3 Communicating Sequential Processes
Communicating Sequential Processes (CSP) can be viewed in two different ways: (i) a notation for describing concurrent systems; (ii) a mathematical theory to study processes which interact with each other and their environment by means of communications [17].
The most fundamental concept of CSP is a communication event. These events are assumed to be drawn from a set $\Sigma$ (EVENTS in CSP$_M$) which contains all possible communications for processes in the universe under consideration. A communication can be viewed as an indivisible transaction or synchronization between two or more processes. The fundamental assumptions about communications in CSP [17] are: (i) they are instantaneous; (ii) they only occur when all its participants (the processes and its environment) allow them.
In CSP$_M$ you can define types and channels. The channels are the wires that allow values to be communicated between processes. For example, considering the dining philosophers example [17], type and channels can be declared as follows:
```plaintext
nametype FORKSPHILS = {0..2}.{0..2}
channel a : Int, channel b : FORKSPHILS
```
This allows channel $a$ to communicate any integer value and channel $b$ to communicate pairs: elements of the cartesian product $\{0 \ldots 2\} \times \{0 \ldots 2\}$ which is represented $\{0 \ldots 2\}.\{0 \ldots 2\}$. The input of a value $x$ through channel $a$ is written as $a?x$. Similarly, output of the fork 0 for the philosopher 1 through channel $b$ is written as $b!0.1$.
The main unit under consideration in CSP is a process. The alphabet of a process $P$ ($\alpha P$) is the set of all events this process can communicate. The “STOP” process represents a broken machine (i.e. a machine that was not able to communicate), and the “SKIP” process represents a successfully terminated process. Processes can be defined using the CSP$_M$ operators. Below we give a brief description of some of the CSP$_M$ operators.
- Prefix ($\to$) - Given an event $e$ in $\Sigma$, the process $e \to P$ is initially willing to communicate $e$, and then behaves like $P$.
3
• External Choice (\(\square\)) - The process \(P \square Q\) offers to the environment the opportunity to communicate the initials of either \(P\) or \(Q\). By initials we mean the set of events in \(\alpha P\) and \(\alpha Q\) that can be communicated immediately.
• Internal Choice (\(\sqcap\), written |~| in CSP\(_M\)) - The process \(P \sqcap Q\) does not offer to the environment any opportunity to choose any communication. It communicates the initials of either \(P\) or \(Q\) internally.
• Parallelism ([|X|]) - The process \(P\) [|X|] \(Q\) executes \(P\) and \(Q\) in parallel, but they must synchronize on the events in the set \(X\), interleaving otherwise.
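The operators above can be given a toy labelled-transition reading in Python (our own illustration, independent of the paper's Action Semantics encoding): a process is modelled by the events it can perform next and its continuation after each event.

```python
# Toy transition-system model of a few CSP operators; names are ours.

STOP = ("STOP", {})                      # no transitions: a broken machine

def prefix(e, p):
    """e -> p : initially willing to communicate e, then behaves like p."""
    return ("prefix", {e: p})

def external_choice(p, q):
    """p [] q : the environment may pick any initial of p or of q."""
    moves = dict(p[1])
    moves.update(q[1])
    return ("extchoice", moves)

def initials(p):
    """The set of events the process can communicate immediately."""
    return set(p[1])

def after(p, e):
    """The continuation of p after communicating e."""
    return p[1][e]
```

For example, `external_choice(prefix("a", STOP), prefix("b", STOP))` has initials {"a", "b"}, mirroring the description of external choice above.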
In the literature we can find many works describing CSP [8, 9, 17], and CSP\(_M\) [18, 3].
4 CSP\(_M\) Action Semantics
In this section we give the action definitions for CSP\(_M\) that are central to the construction of the animator. The complete work can be found in [5].
4.1 Abstract Syntax
Types in CSP\(_M\) can be either single or composite, where composite types represent the cartesian product of single types. Types representing intervals of a discrete type may be defined as well. Since we are not covering all possible types, we leave the definition open (with the symbol □).
- \(\text{Type} = \llbracket\,\text{Type ("," Type)}^{+}\,\rrbracket \mid \llbracket\,\text{"\{" Constant ".." Constant "\}"}\,\rrbracket \mid \llbracket\,\text{"Int"}\,\rrbracket \mid \llbracket\,\text{"Bool"}\,\rrbracket \mid \llbracket\,\text{"\{" Identifiers "\}"}\,\rrbracket \mid \square\).
In CSP\(_M\) we can declare types to be used in channel declarations; channels to be used in processes declarations; and processes to describe the problem domain.
- \(\text{Declaration} = \llbracket\,\text{"nametype" Identifier "=" Type}\,\rrbracket \mid \llbracket\,\text{"channel" Identifiers (":" Type)}\,\rrbracket \mid \text{Process-Declaration} \mid \square\).
A process declaration gives the name of the process and its definition: a process.
- \(\text{Process} = \text{Identifier} \mid \llbracket\,\text{"(" Expression "\&" Process ")"}\,\rrbracket \mid \llbracket\,\text{Identifier ("?" | "!" | ".") Expression}\,\rrbracket \mid \llbracket\,\text{Process ProcOp Process}\,\rrbracket \mid \square\).
In the definition of a process, we can refer to other processes, and use guards, prefixing, and the CSP operators presented in Section 3. Since CSP\(_M\) is a functional language, we can use functional expressions in guards and communications.
The description of CSP\(_M\) involves well-known functional descriptions [20]. Other important semantic function definitions like channels, types, and expressions are omitted here (but can be found in [5]).
4.2 Process Representation
Here, CSP processes are represented using Action Notation agents [11]. Agents abstract a machine that executes actions, like a thread that runs in a CPU. We extend the default agents (user-agent), calling them process-agents, to have a process status associated with them, and define semantic functions to alter their status and create new agents. This status is used for synchronization purposes.
More specifically, a process is represented by a tuple.
\[(1) \text{process} = (\text{process-agent}^2, \text{process}^2, \text{cell}^2).\]
It contains two agents: the executor and the environment. The former executes the CSP operators, and the latter interacts with it.
The process tuple also contains two other processes: the left and the right operands. The operational execution of parallelism uses this structure. In a process \( R = P\ [|\{a\}|]\ Q \), the left and right of \( R \) are \( P \) and \( Q \), respectively, and the environment of both \( P \) and \( Q \) is \( R \). In this way we link the processes and can abstract the synchronization mechanism.
For sequential execution (detailed in Section 4.5), the left and right processes hold the special value unknown. For the topmost process, the environment is a special kind of agent presented in Section 4.4. The environment-agent field is used to link the process network.
The tuple also has two cells that abstract the process representation: the first records the process LTS (labelled transition system) [10] and the second its walk history. The main motivation to represent a process as an LTS, instead of as an action, is the work in [18], an operational semantics for CSP\(_M\) that defines the behaviour of processes as an LTS. We define semantic functions that create a process and access some of its fields.
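The process tuple (1) can be sketched as a Python dataclass: two agents, two (possibly unknown) sub-processes, and two cells holding the LTS and the walk history. The field and agent names below are illustrative assumptions, not the paper’s notation.

```python
from dataclasses import dataclass, field
from typing import Optional

UNKNOWN = None  # stands for the special value "unknown" of sequential processes

@dataclass
class Process:
    executor: str                       # process-agent: executes the CSP operators
    environment: str                    # agent that interacts with the executor
    left: Optional["Process"] = UNKNOWN
    right: Optional["Process"] = UNKNOWN
    lts: dict = field(default_factory=dict)      # state -> {event: next state}
    history: list = field(default_factory=list)  # walk through the LTS

# R = P [|{a}|] Q: the left and right of R are P and Q,
# and the environment of both P and Q is R's process-agent.
P = Process(executor="agent-P", environment="agent-R")
Q = Process(executor="agent-Q", environment="agent-R")
R = Process(executor="agent-R", environment="user-environment", left=P, right=Q)
```

A sequential process simply keeps `left` and `right` as `UNKNOWN`, matching the distinction the protocol actions make between the basic and parallel cases.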
4.3 Primitive Communicative Actions
Below we describe some primitive communicative actions used in the synchronization protocol. These actions are used in our specification, but they are sufficiently generic to become an extension for the Action Notation communicative facet.
The action wait [for _][on _] is used to synchronize the current (performing) agent. First it uses the put _ [in _ status] action to set the status of the current agent to WAIT. Next it uses the receive primitive action, which blocks until the expected message for the current agent arrives in the buffer. Finally, it uses the patiently check action to wait for the sending agent to set the status of the current agent to ACTIVE, and returns the received message. We use status flags, like WAIT and ACTIVE, to have detailed control over synchronization.
- wait [for _][on _] :: yielder[of process-agent], yielder[of event^+] →
  action[giving contents of a message | completing | diverging | communicating]
        [using current buffer] (total, restricted).

(1) wait [for x][on s] =
      put the performing-agent [in WAIT status]
    and
      receive a message [from x][containing s]
    then
      patiently check (the status of the performing-agent is ACTIVE)
    and then
      give the contents of the given message.
Next we have an action that offers a set of events s to be performed by an agent p. This action has the precondition that the receiving agent is not active: its status is WAIT.
- offer _ [to _] :: yielder[of event^+], yielder[of process-agent] →
  action[communicating | diverging][using current buffer] (total, restricted).

(1) offer s [to p] =
      patiently check (the status of p is WAIT)
    then
      send a message [to p][containing s].
We also have two actions used to synchronize two agents (the performing and the environment agents), with respect to a given synchronization set. The first one is used in the main flow of events and the other handles the special events √, yielded by the “SKIP” process, and τ, which represents an internal event to be performed independently of the environment agent (i.e. internal choices). Their definitions can be found in [5].
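The wait/offer handshake above can be sketched with Python threads: the receiver records WAIT and blocks on its buffer, while the sender patiently checks for WAIT, delivers the message, and flips the receiver to ACTIVE. The `Agent` class, the deterministic choice, and all names are our assumptions; the actual protocol lives in action notation.

```python
import queue
import threading
import time

class Agent:
    def __init__(self):
        self.status = "ACTIVE"
        self.buffer = queue.Queue()

    def wait_on(self, expected):
        """wait [on s]: record WAIT, block on the buffer, then await ACTIVE."""
        self.status = "WAIT"                 # put _ [in WAIT status]
        msg = self.buffer.get()              # receive a message (blocks)
        while self.status != "ACTIVE":       # patiently check ACTIVE
            time.sleep(0.001)
        assert msg in expected
        return msg                           # give the contents of the message

def offer(events, receiver):
    """offer s [to p]: wait until p is in WAIT status, then send."""
    while receiver.status != "WAIT":         # patiently check WAIT
        time.sleep(0.001)
    choice = sorted(events)[0]               # environment picks one event
    receiver.buffer.put(choice)              # send a message [containing s]
    receiver.status = "ACTIVE"               # sender reactivates the receiver

agent = Agent()
result = []
t = threading.Thread(target=lambda: result.append(agent.wait_on({"a", "b"})))
t.start()
offer({"a", "b"}, agent)
t.join()
print(result)  # ['a']
```

The status flags make the rendezvous explicit: neither side proceeds until it has observed the other side’s flag, mirroring the WAIT/ACTIVE precondition discipline of the protocol actions.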
The three actions below assume that the process representation is well constructed. They generalize the process representation structure and its companion operations. To define concrete descriptions we need to fully specify these actions; here we give their headers.
The behaviour of alphabet [of process p] is to return the initials of a process: the set of all events initially communicable by the process, including τ and √.
- alphabet[of _] :: yielder[of process] →
action[giving set of event | completing] (total, restricted).
The behaviour of fire [event e][of process p] is to adjust the underlying representation of the process, returning the next step in the process representation. This can be, for instance, the next node of the LTS graph.
- fire _ [of _] :: yielder[of event], yielder[of process] →
action[giving cell | completing | failing].
The behaviour of chooses [in s][of process p] represents the environment agent of the process p choosing an event from the given event set s.
- chooses [in _][of _] :: yielder[of event^+], yielder[of process] →
  action[giving an event | failing | completing] (total).
The actions alphabet[of process p] and fire[event e][of process p] correspond to the initial and after semantic functions in the denotational semantics presented in [18].
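The correspondence with the initial/after functions can be made concrete over an LTS stored as a dictionary mapping each node to its enabled events and successors. This is an illustrative sketch with invented names; `TICK` stands for the termination event √.

```python
TICK = "tick"  # stands for the termination event (√)

def alphabet(lts, state):
    """Initials: all events communicable from the current LTS node."""
    return set(lts.get(state, {}))

def fire(lts, state, event):
    """After: the next LTS node reached by performing `event`."""
    successors = lts.get(state, {})
    if event not in successors:
        raise ValueError("failing: event not enabled")  # the failing outcome
    return successors[event]

# coin -> coffee -> SKIP, rendered as an LTS
lts = {0: {"coin": 1}, 1: {"coffee": 2}, 2: {TICK: 3}, 3: {}}
print(alphabet(lts, 0))        # {'coin'}
print(fire(lts, 0, "coin"))    # 1
print(alphabet(lts, 2))        # {'tick'}
```

Walking the LTS with repeated `alphabet`/`fire` calls is exactly what the animator’s protocol actions do, one communicated event at a time.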
4.4 Initialization Actions
The action presented in this section starts the execution of a process \( p \), considering the CSP sequential operators (prefix, external and internal choices, and recursion) and parallel operators (generalized parallelism and interleaving). It establishes the status preconditions for the agents that perform each action.
(1) start-user-environment with p =
      check both (the environment-agent of p is the performing-agent,
                  the process-agent of p is the contracting-agent) ^1
    thence
      subordinate the process-agent of p ^2 and put the performing-agent [in ACTIVE status]
    then
      send a message [to the given agent][containing function of choose-action of p] ^3
    and then
      regive ^4
    then
      unfolding ^5
        select-an-event [from p][on the given event^+] ^6
      then
        get-ack-and-wait [from the process-agent of p][on the given event] ^7
      then
        unfold. ^8
To simplify the explanation of this action we number its important points and refer to these numbers in the text. The action first checks \(^1\) that the environment agent of \( p \) is running (it is the performing agent) and that its process agent is the one with which it is interacting (the contracting agent). This guarantees that the environment agent controls the process agent. Next, the subordinate action is used to activate the process agent \(^2\), and the ACTIVE status of the environment agent is recorded. Then a message is sent to the given agent \(^3\) to choose one of the possible CSP operator actions (CSP\(_{\text{basic protocol}}\) [for \( p \)] or CSP\(_{\text{parallel protocol}}\) [for \( p \)][on \( s \)]) to execute according to the structure of the process \( p \): the basic protocol is chosen if \( p \) has its left and right processes set to unknown; otherwise the parallel protocol is chosen. This step also sets the agent status to satisfy the preconditions of the protocol actions. The behaviour of the protocol actions is to give events to be selected by the user environment, so the action regives the received events \(^4\). In the unfolding part \(^5\), the action select-an-event [from \( p \)][on \( e^+ \)] captures the user environment selecting an element \(^6\). Next, the action get-ack-and-wait [from \( p \)][on \( e \)] waits for the set of initially communicable events of \( p \) \(^7\), from which an element is again selected by select-an-event. This goes on \(^8\) until the TERMINATE status flag is set for the process agent of \( p \) by the actions in the next section, in which case select-an-event terminates. Note that the action start-user-environment executes in its own agent, so it plays the role of the topmost environment for the process network.
4.5 Actions to Animate CSP\textsubscript{M} Operators
Here we define actions to animate the prefix, guarded recursion, and external and internal choice CSP operators. We have only two actions to represent the behaviour of the CSP\(_M\) operators: one for the sequential operators (CSP\(_{\text{basic protocol}}\) [for \( p \)]) and another for the parallel operators (CSP\(_{\text{parallel protocol}}\) [for \( p \)][on \( s \)]). These two actions animate a CSP process by controlling the execution flow of the agents. The control is based on the actions alphabet [of \( p \)], fire [\( e \)][of \( p \)], and chooses [in \( s \)][of \( p \)], which, as explained in the previous section, capture the operational semantics in [18]. There is a precondition for these actions to function properly: the process-agent and environment-agent status of the given process must be WAIT and ACTIVE, respectively, in order to avoid livelock.
Firstly, CSP\(_{\text{basic protocol}}\) [for \( p \)] synchronously puts the process-agent and the environment-agent of the given process in ACTIVE and WAIT(\(\Sigma\)) status, respectively. \(^1\)
- CSP_basic protocol [for _] :: yielder[of process] →
  action[giving event^+ | communicating | completing | failing | diverging]
        [using current bindings | current storage | current buffer] (total).

(1) CSP_basic protocol [for p] =
      synch_states [from the environment-agent of p][with the process-agent of p] ^1
        [containing elements of the set bound to Σ]
    then
      unfolding
Then it builds the set of events that \( p \) can communicate, including τ and √. \(^2\) After that, the action checks whether the process is deadlocked (the set “ATS” of the process alphabet, its initials, is empty) \(^3\) and treats that situation by synchronously putting the process and environment agents in TERMINATED and ACTIVE status, respectively. \(^4\) It treats √ \(^5\) and τ \(^8\) in much the same way. For the former, finishing actions are fired \(^6\) and the process silently dies \(^7\). For the latter, invisible actions are fired \(^9\) and the action continues \(^{10}\) (unfolds). For simplicity, we prioritize the selection of τ and √.
      bind "ATS" to the alphabet [of p] ^2
    thence
      check ^3 and treat deadlock ^4 ...
    or
      check ^5, execute ^6 and treat termination ^7 ...
    or
      check ^8, execute ^9 and treat invisible actions ^10 ...
The action then checks whether the process has any event in the “ATS” set to execute \(^{11}\). It offers the possible communication set to the environment and waits for a response selecting one of the given ATS events \(^{12}\); the environment must also be waiting for this to happen (the barrier granted in \(^1\)). Then we adjust the process representation according to the chosen ATS event and regive the chosen event \(^{13}\); this extracts the selected event from the communication set (the initials of the process). After that, we send the same event to the environment and synchronize the status (i.e. activate the environment and make the process agent wait on it for any event \(^{14}\)). Finally, we restart the action after the process and the environment have been resynchronized. \(^{15}\)
      check (not (either (either (the given event, τ), √))) ^11
    thence
      offer the set bound to "ATS" [to the environment-agent of p] ^12
    then
      synch_events [from the process-agent of p][with the environment-agent of p]
        [containing the elements of the set bound to "ATS"]
    thence
      fire [the given event][of p] ^13
    and
      the process-agent of p asks [the given event][to the environment-agent of p] ^14
        waiting on the elements of the set bound to Σ
    then
      unfold. ^15
The action for generalized parallelism and interleaving (CSP\(_{\text{parallel protocol}}\) [for \( p \)][on \( s \)]) is presented below. Since its header is similar to the action above, we omit it here. This action has an additional precondition: the left and right processes must be different from unknown.
Firstly, the action needs to ensure that the environment agent of the left process is the process agent of \( p \), and the same for the right process \(^1\); this also checks for unknown processes. This check \(^1\) is needed because the agent performing this action plays the role of the environment for the left and right processes; the process agent of \( p \) receives the messages of the left or right process agents as their environment agent \(^4\). Note that the environment agent of the process \( p \) can be either another sequential process or the topmost user environment.
As in the previous action, it needs to synchronously put the process-agent and the environment-agent of the given process in ACTIVE and WAIT(\( \Sigma \)) status, respectively. \(^2\) Here we want to define a kind of forking inside the process \( p \). \(^3\) Note that there is no start order for the processes \(^3\), so we use the interleaving property of the “and” action combinator to capture this [11].
CSP_parallel protocol [for p][on s] =
      check (the environment-agent of the left-process of p is the process-agent of p) ^1
    and
      check (the environment-agent of the right-process of p is the process-agent of p) ^1
    and then
\[
\text{synch\_states} \quad [\text{from the environment-agent of } p] [\text{with the process-agent of } p]^2 \\
\quad \text{containing elements of the set bound to } \Sigma \\
\text{then} \\
\quad \text{start-user-environment with the left of } p^3 \\
\quad \text{and} \\
\quad \text{start-user-environment with the right of } p^3 \\
\text{then}
\]
The action recursively builds the set “C” of possible communicable events. The auxiliary action build-C-set\((A, B, s)\) applies the step law for the CSP parallel operator, \( C = (A \cap B \cap X) \cup (A \setminus X) \cup (B \setminus X) \), where \( A \) and \( B \) are the initials of \( P \) and \( Q \) in \( P\ [|X|]\ Q \), and \( X \) is the given synchronization set \( s \) [17].
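The step law translates directly into Python set operations. The function name mirrors the auxiliary action; the concrete sets below are invented for illustration.

```python
def build_c_set(a, b, x):
    """Step law for P [|X|] Q: events both sides can synchronize on,
    plus events each side can perform independently outside X."""
    return (a & b & x) | (a - x) | (b - x)

A = {"a", "c"}   # initials of P
B = {"a", "d"}   # initials of Q
X = {"a", "b"}   # synchronization set
print(sorted(build_c_set(A, B, X)))  # ['a', 'c', 'd']
```

Here "a" survives because both sides offer it and it lies in X, while "c" and "d" interleave freely; an event in X offered by only one side (like "b" would be) is blocked, which is exactly how the protocol decides between barrier synchronization and interleaving.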
Next, the action checks if the process is deadlocked (“C” is empty) and treats that situation synchronously putting the process and environment agents in TERMINATED and ACTIVE status, respectively. Note that it must inform the participants (left and right processes) that this situation has been reached.
\[
\text{unfolding}^4 \\
\quad \text{bind “C” to build-C-set (the alphabet[left of } p], }^5 \\
\quad \quad \text{the alphabet[right of } p, s \) \\
\quad \text{thence} \\
\quad \quad \text{check }^6 \text{ and treat deadlock }^7 \ldots \\
\text{thence}
\]
After that, the action concurrently waits for the acknowledgements of its participant processes, within the elements of “C”. The acknowledged events are the events chosen by either process (left or right) \(^8\). We have interleaving because of the “and” property, not agent-based (CPU) concurrency.
\[
\text{respectively get-ack-and-wait[from the process-agent of the right of } p]^8 \\
\quad \text{[on the elements of set bound to “C”]} \\
\text{and}^9 \\
\quad \text{respectively get-ack-and-wait[from the process-agent of the left of } p]^8 \\
\quad \text{[on the elements of set bound to “C”]} \\
\text{thence}
\]
In this case, the processes can interleave if the events are outside the synchronization set \( s \), or they must achieve synchronization via a barrier. Next, the event is selected for execution, and the process continues to execute.
      check (the given event is in s) ^10 ... or interleave ^12 ...
    then
      select-an-event [from p][on the given event^+] ^13
    thence
      unfold. ^14
These actions give an operational view of the definitions of CSP operators and the step law of the parallel operator \cite{17}.
5 Conclusions and Related Works
In this work we have used Action Semantics to define an operational semantics of CSP\textsubscript{M} [18] in a more legible way. We also made extensive use of the communicative facet of action notation, extending it with new primitive actions for synchronization, handshaking, and communication. Tools like ANI [14] or ABACO [15] may be used in future work to run and check our semantic description, in order to obtain a formal animator implementation of the basic CSP\textsubscript{M} operators. Since action notation has an underlying operational semantics, it would be interesting future work to compare this operational view of action notation against the CSP\textsubscript{M} operational semantics described in [18]; with this we could guarantee that the same behaviour is defined in each description (i.e. our action semantics description of CSP\textsubscript{M} against the operational semantics of [18]). Essentially, what distinguishes our description from the raw operational semantics of CSP\textsubscript{M} [18] is readability and modularity. It is also an attempt at a more concrete implementation of the CSP\textsubscript{M} execution behaviour.
The work in [2] uses the communicative facet to describe distributed network protocols (SNMPv3). The concurrent primitives of a functional language (ML) are presented in [12]. As far as we know, there is only one other work that uses action semantics to describe CSP [1], and it is based on informal descriptions of the CSP dialect originally defined in Hoare’s seminal paper [8]. Due to the scarcity of works using the communicative facet of action notation, it was difficult to compare against other works, since there are no widely available action notation interpreters that run communicative actions.
The work in [2] inspired us to define an Action Semantics for CSP; the concurrent ML description of [12] inspired us to build a decentralized and generalized version of the communicative actions. This made the construction of a communicative facet extension framework for Action Semantics easier. This is an important part of our future work.
Some concepts were not contemplated in our work, such as event hiding and renaming, replicated operators, and data type definitions. We also do not consider the whole type expressiveness of CSP\textsubscript{M}, as noted in [3]. Due to lack of space we cannot explain some of the actions and the action notation structure in more detail, but comprehensive work in this field can be found in [11] and [5].
6 Acknowledgments
This work is partially supported by CNPq, the Brazilian Research Agency. We would like to acknowledge Peter Mosses for his effort in obtaining out of print references on Action Semantics of CSP and for many valuable discussions. The idea of using an LTS to first give an action semantics to CSP was first pursued by Alexandre Mota [13]; we thank him for discussing his work with us.
References
FADE: A Programmable Filtering Accelerator for Instruction-Grain Monitoring
Sotiria Fytraki†, Evangelos Vlachos‡, Onur Koçberber†, Babak Falsafi†, Boris Grot*
†EcoCloud, EPFL  ‡Oracle Labs  *University of Edinburgh
Abstract
Instruction-grain monitoring is a powerful approach that enables a wide spectrum of bug-finding tools. As existing software approaches incur prohibitive runtime overhead, researchers have focused on hardware support for instruction-grain monitoring. A recurring theme in recent work is the use of hardware-assisted filtering so as to elide costly software analysis.
This work generalizes and extends prior point solutions into a programmable filtering accelerator affording vast flexibility and at-speed event filtering. The pipelined microarchitecture of the accelerator supports a peak filtering rate of one application event per cycle, which suffices to keep up with an aggressive OoO core running the monitored application. A unique feature of the proposed design is the ability to dynamically resolve dependencies between unfilterable events and subsequent events, eliminating data-dependent stalls and maximizing the accelerator's performance. Our evaluation results show a monitoring slowdown of just 1.2-1.8x across a diverse set of monitoring tools.
1. Introduction
Software robustness poses a key challenge to application developers as modern systems become increasingly complex, leading to more bug-prone software [22]. As bugs and security vulnerabilities proliferate, programmers' productivity suffers and security breaches intensify, eventually resulting in catastrophic system failures. Dynamic instruction-grain monitoring is a powerful approach to improve software robustness: by monitoring individual program instructions at runtime [21], it allows for detection of erroneous behavior, such as bugs [12] and security vulnerabilities [17].
Software dynamic instruction-grain monitoring tools afford high flexibility, but they slow down program execution by up to two orders of magnitude [16]. The high slowdown is due to the numerous monitoring actions taken per application instruction (e.g., perform correctness checks and bookkeeping). As high runtime overhead limits opportunities for deployment, prior work considered trading off flexibility for performance through custom hardware targeting specific monitoring functionality [5, 7, 8, 23]. While both architectural details and targeted monitors vary widely among the various proposals, many tend to employ some form of filtering as a task-specific approach for reducing the monitoring load. For instance, HardBound filters out non-pointer application data by accessing and checking the relevant metadata with custom hardware [7]; FlexiTaint employs rule-based filtering to determine whether taint propagation can be performed in dedicated logic avoiding software analysis [23].
The key contribution of this paper is in developing a general filtering accelerator to support a broad range of monitoring tasks with high filtering coverage and low hardware overhead. We generalize and extend earlier observations regarding filterable events by linking them to common application and monitoring activities, such as initializing a stack frame on a function call. We observe that unfilterable events, which require processing by the monitoring software, often contain dependencies on subsequent filterable events, thus lowering filtering efficiency due to stalls. In response, we develop monitor-agnostic architectural extensions that enable concurrent filtering and processing of unfilterable events. We also observe high non-uniformity in the arrival rate of filterable and non-filterable events alike. However, we demonstrate that shallow queues are sufficient to buffer the event bursts.
Using a suite of diverse monitors, we show that:
• The average monitoring load rarely exceeds one event per cycle, indicating that a single-issue filtering accelerator with a throughput of one event per cycle suffices.
• Instruction and stack-update events dominate the monitoring load. Instruction events require fine-grained accesses to the monitor's metadata, most of which can be filtered through (1) hardware-executed checks of metadata state against an invariant, and (2) detection and elimination of redundant updates that leave the metadata state unmodified. Stack-update events perform bulk metadata initialization in response to function calls and returns and can be efficiently handled with a simple state machine.

‡This work was done while the author was at CMU.
*This work was done while the author was at EPFL.
• Maintaining a high filtering rate requires that filtering takes place concurrently with the processing of unfiltered events, a task that is complicated due to dependencies between unfilterable and subsequent filterable events. To decouple filtering and the processing of the unfiltered events, we observe that there is only minimal state that is critical for deciding if a dependent event is filterable. We show that this state can be updated for unfilterable events directly in the accelerator with simple hardware extensions.
• Both filterable and unfilterable events arrive in bursts that must be buffered to reduce stalls due to backpressure. Shallow queues of 16 to 32 events are sufficient for this purpose and allow for decoupling of the filtering accelerator from the core running the application.
Building on the observations above, this paper develops an architecture, along with full microarchitectural support, for a flexible at-speed Filtering Accelerator for Decoupled Event processing, or FADE. Using full-system cycle-accurate simulation, we show that FADE is highly effective, filtering out 84-99% of events that would otherwise be handled in software, thereby reducing the application slowdown to only 1.2-1.8x (versus 1.6-7.4x for unaccelerated execution). In 40nm technology, FADE requires 0.12mm² of area and 273mW of power at peak.
2. Motivation
Instruction-grain monitoring is a powerful analysis technique with the ability to observe the application actions in fine detail. The monitoring tools observe dynamic application events (e.g., instructions, function calls) to identify erroneous, anomalous, and otherwise interesting behaviors. In doing so, monitors check whether a predefined program invariant holds. Invariants may specify that every accessed memory location has been allocated, or that the value used as a jump target is not spurious. To assist analysis, monitors maintain bookkeeping information, or metadata, about the state of the application memory and registers. Based on the event, the relevant metadata are checked against the invariant and/or updated with a new value.
The powerful analysis enabled through instruction-grain monitoring comes with the downside of high performance overhead. Software-only schemes, such as Valgrind [16], provide flexibility. However, the flexibility comes at a steep performance penalty of up to two orders of magnitude [16], as for each application event, a software handler is dispatched and/or updates metadata. Several optimizations have been proposed to lower the runtime overhead through analysis-specific optimizations [9], sampling of application activity [10], and hot-path analysis [18], but they either incur considerable slowdown or are not widely applicable.
Filtering of application events that trigger monitoring activity has been proposed as a way to reduce the high runtime overhead. Prior work has identified the potential of filtering and has introduced hardware-based mechanisms to achieve high monitoring performance [2, 7, 19, 23]. However, prior work treats filtering as a trade-off between flexibility and performance. Filtering mechanisms that achieve high efficiency and low runtime overhead are focused on a narrow set of monitoring analyses (e.g., only taint flow analysis [23], or only memory safety analysis [7]). Filtering mechanisms that aim at high flexibility fail to considerably lower the slowdown [2]. Moreover, a number of existing filtering proposals either require intrusive modifications to the core microarchitecture (e.g., a new pipeline stage [23]) or have high resource overheads, needing a dedicated core for the monitoring task [2, 19].
This work makes the observation that filtering does not have to trade flexibility for performance, and can be effective at accelerating a wide range of monitoring tools. Furthermore, filtering can be independent of the underlying system and monitoring architecture while accommodating different design points in terms of the core microarchitecture and the execution substrate for processing of unfilterable events.
3. Design Considerations
Figure 1 shows the main entities involved in the event processing flow of a monitoring system with filtering support. The application generates events as instructions retire and enqueues the events of interest (i.e., monitored events) in the event queue. The rest of the events (i.e., unmonitored events) do not require further processing. The filtering accelerator (FA) dequeues events from the head of the event queue and checks whether the filtering condition is satisfied. If so, events are filtered and no further action is required. As further processing is necessary for
the rest of the events (i.e., unfiltered events), the filtering accelerator places them into the unfiltered event queue. Finally, the unfiltered event consumer dequeues and handles the unfiltered events completing the monitoring analysis.
3.1. Event Producer
As the application instructions retire, they generate events. However, monitoring analyses do not require all application events to be processed. As a result, software [16] and hardware [2, 6] monitoring frameworks include support to eliminate¹ the unmonitored events. We define monitoring load as the ratio of monitored events to all committed instructions.
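As a minimal sketch of the monitoring-load metric defined above (the function name is ours, not the paper's):

```python
# Hedged sketch of "monitoring load": the ratio of monitored events
# to all committed instructions, as defined in the text.
def monitoring_load(monitored_events, committed_instructions):
    """Fraction of committed instructions that trigger a monitoring action."""
    if committed_instructions == 0:
        return 0.0
    return monitored_events / committed_instructions
```

For example, AddrCheck's averages reported later (0.4 monitored instructions per cycle out of an application IPC of 1.1) correspond to a load of roughly 0.36.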
Based on the types of the monitored instruction events, monitoring analyses can be broadly categorized into two types: memory tracking, which processes only memory instructions, and propagation tracking, which may track any instruction type and propagate a metadata value from the source operand(s) to the destination operand. The exact instruction types being monitored depend on the monitor's task. For instance, MemLeak, which identifies memory leaks [13], monitors instructions that may propagate a pointer value, such as arithmetic and load/store instructions, but eliminates floating-point instructions.
To quantify the load on different monitors, we measure the applications’ monitored IPC on an aggressive 4-way OoO core (we detail the benchmarks and monitors in Section 6). In Figure 2(a), we present the per-monitor results averaged across benchmarks. For instance, for AddrCheck, the average application IPC (including both monitored and unmonitored instructions per cycle) is 1.1, out of which 0.4 (monitored instructions per cycle) require a monitoring action to be taken.
In general, the monitoring load of memory-tracking monitors is lower compared to the monitoring load of propagation-tracking monitors, because propagation-tracking monitors tend to process more events. As a result, the former have a low monitored IPC (up to 0.4 event per cycle), while the opposite holds for the latter (up to 0.68 event per cycle).
Figure 2(b) shows the per-benchmark results for AddrCheck, a memory-tracking monitor, which checks whether an access goes to allocated memory [16]. For all benchmarks, the monitored IPC is significantly below 1.0, with an average of 0.24. In contrast, Figure 2(c) shows the per-benchmark results for MemLeak, a propagation-tracking monitor. While most benchmarks also have a monitored IPC of below 1.0, with an average of 0.68, the monitored IPC of MemLeak is 2.8x higher than AddrCheck's, underscoring the differences in monitoring load.
The monitored IPC indicates the event generation rate of the applications and dictates the rate at which events must be consumed by the filtering accelerator. The presented analysis shows that the monitored IPC is below 1.0 for a range of monitors, even when the event stream is produced by an aggressive OoO core. We thus conclude that a filtering accelerator with a processing capability of one event per cycle can keep up with the event producer.
3.2. Event Queue
We next examine the buffering requirements between the event producer and the filtering accelerator. For the purpose of our study, we assume a filtering accelerator that processes one event per cycle and has an infinite event queue. In Figure 3(a, b), we present the cumulative distribution of the event queue’s occupancy for (a) AddrCheck, a memory-tracking monitor, and (b) MemLeak, a propagation-tracking monitor, on an aggressive 4-way OoO core.
For memory-tracking monitors (Figure 3(a)), the monitored IPC is low, resulting in small bursts of events that can be captured in an 8-entry queue. For propagation-tracking monitors (Figure 3(b)), the monitored IPC is considerably higher, resulting in longer bursts. Depending on the benchmark’s monitored IPC, the queueing requirements range from 128 entries (mcf – low monitored IPC) to 8K entries (omnetpp – higher monitored IPC). For
---
1. The term filtering has been used in prior work [6] to refer to elimination of unmonitored events. We do not use the term filtering in this context because no monitoring task is associated with unmonitored events.
benchmarks with a monitored IPC greater than one, such as bzip, queueing cannot help, as the filtering rate (1.0 event per cycle) is below the event generation rate (1.2 events per cycle).
We next compare the performance loss stemming from finite queues over an infinite event queue. We evaluate two queue sizes: (1) 32K entries, which can accommodate the bursts based on our analysis, and (2) 32 entries, which is a practical-sized queue. In Figure 3(c), we present results for MemLeak, a monitor that exerts the greatest pressure on the queue due to its high monitored IPC. We observe that the 32K-entry queue can fully accommodate the bursts (resulting in no slowdown) for all benchmarks but bzip and gcc, corroborating the burstiness analysis in Figure 3(b). Meanwhile, a much smaller queue of only 32 entries results in a slowdown that ranges from none (mcf, astar, libq.), to 1.17x (gobmk). Queueing cannot help with bzip (monitored IPC over 1.0) resulting in a 1.33x slowdown for a 32K-entry queue and a 1.36x slowdown for a 32-entry queue. For gcc, queueing reduces the slowdown from 1.1x (32-entry queue) to 1.04x (32K-entry queue).
We conclude that a small (e.g., 32-entry) event queue keeps the slowdown caused by bursts insignificant.
3.3. Filtering Accelerator
The filtering accelerator aims at reducing the overhead of common monitoring activities, which mainly happen in response to two categories of application events: (1) instructions, (2) function calls and returns. The monitors also process high-level events (e.g., malloc, fopen, mmap). The filtering accelerator does not target high-level events, as they are infrequent and require complex handling.
The vast majority of monitoring activity is due to instruction events requiring accesses, checks, and updates to the metadata of the instruction operands. Nearly all remaining monitoring activity is due to function calls and returns. At each function call (return), a frame is allocated (deallocated) on the application stack. We refer to both types of activity as a stack update. Stack updates must be shadowed by the monitor to properly track which portion of the application memory has been allocated. Therefore, the monitor sets a region of metadata memory to a known value (e.g., allocated and uninitialized on a call, unallocated on a return).
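A toy model of the stack-update shadowing described above; the state values and names are our own assumptions rather than the paper's metadata encoding:

```python
# Illustrative shadow-metadata states (our encoding, not the paper's).
UNALLOCATED, ALLOC_UNINIT = 0, 1

def shadow_stack_update(metadata, frame_start, frame_len, is_call):
    """On a call, mark the frame's shadow bytes allocated-and-uninitialized;
    on a return, mark them unallocated again."""
    value = ALLOC_UNINIT if is_call else UNALLOCATED
    for addr in range(frame_start, frame_start + frame_len):
        metadata[addr] = value
    return metadata
```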
Figure 4(a) breaks down the monitors’ execution time into instruction (classified into RU and CC, explained later) and stack-update handling. While instructions dominate the execution profile, in two out of five studied monitors stack updates consume up to 17% of the execution time and represent an attractive acceleration target.
3.4. Unfiltered Event Queue and Consumer
Events that cannot be handled by the filtering accelerator (i.e., unfiltered events) require further processing by the monitoring system. An ideal unfiltered event consumer should be able to support a wide variety of monitoring tools for comprehensive bug coverage. This requirement argues for a programmable substrate, such as a general-purpose core (e.g., LBA [2]) or a reconfigurable fabric [6].
Nearly all unfiltered events arise as a result of (1) memory allocation, deallocation, or initialization; and (2) traversals of tainted data structures or files in taint-tracking monitors. In general, these actions involve multiple memory words and, as a result, trigger a burst of metadata updates that cannot be filtered.
Figure 4(b) plots the distance, as a cumulative distribution, between unfiltered events for MemLeak. Results are similar for other monitors. The distance is measured in events. We observe that two unfiltered events are typically separated by up to 16 filterable events. Based on this analysis, we define an unfiltered burst as a sequence of unfiltered events, each of which is separated by at most 16 filterable events. Figure 4(c) shows the average burst size (measured in unfiltered events) for each monitor and benchmark pair. We observe that the bursts are small, with an average size of 16 or fewer unfiltered events for the majority of benchmarks and monitors. We thus conclude that a small (e.g., 16-entry) unfiltered event queue is effective at accommodating the bursts.
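The burst definition above can be sketched as a grouping pass over the event stream (all names are ours; `True` marks an unfiltered event, `False` a filterable one):

```python
def unfiltered_bursts(events, max_gap=16):
    """Group unfiltered events (True) into bursts: consecutive unfiltered
    events separated by at most `max_gap` filterable events (False)
    belong to the same burst. Returns the list of burst sizes."""
    bursts, current, gap = [], 0, None
    for unfiltered in events:
        if unfiltered:
            if current and gap is not None and gap > max_gap:
                bursts.append(current)   # gap too large: close the burst
                current = 0
            current += 1
            gap = 0
        elif gap is not None:
            gap += 1                     # count filterable events since last
    if current:
        bursts.append(current)
    return bursts
```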
An important implication of our analysis is that because filterable events are interleaved between pairs of unfilterable events, it is essential to perform filtering concurrently with the processing of unfiltered events. However, inter-event dependencies mandate in-order event processing, forcing a naïve filtering accelerator design to stall when an unfiltered event is processed by the unfiltered event consumer.
3.5. Summary
Our study of a broad range of applications and monitors shows that the monitoring load rarely exceeds one event per cycle even with an aggressive OoO core producing events. While instructions dominate the event stream, stack updates also contribute to the monitoring load. Event production is bursty, mandating queueing for pending events; however, a small queue is sufficient for good performance. Unfiltered events are also bursty and are sparsely spaced within an otherwise filterable event stream.
These results point to a programmable filtering accelerator able to keep up with an average monitoring load of one event per cycle.
The next two sections present our design for such an accelerator. We first present a design that does not support filtering concurrent with the processing of unfiltered events (Section 4), and then extend it to support Non-Blocking Filtering (Section 5).
4. Baseline Filtering Accelerator
We introduce our baseline Filtering Accelerator for Decoupled Event processing, or FADE. FADE is composed of two building blocks: (1) the Filtering Unit, which filters instruction events (Section 4.1), and (2) the Stack-Update Unit, which accelerates stack-update events (Section 4.2). Without loss of generality, we assume that unfiltered events are processed in software on a general-purpose core.
4.1. Filtering Unit
To elide software execution, the Filtering Unit supports two filtering actions, clean checks (CC) and redundant updates (RU). Clean checks are based on the observation that most of the time applications behave as expected and the metadata match the expected invariant (e.g., memory references are to initialized memory). Redundant updates are based on the observation that metadata are stable as propagation handlers commonly update the metadata with the same value (e.g., initialized memory remains initialized even when the actual value in application memory changes). Figure 4(a) breaks down the execution time of instruction events into clean checks and redundant updates.
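The two filtering actions can be sketched as simple predicates over small-integer metadata (an illustrative model, not the actual filter circuitry):

```python
# Hedged sketch of the two filtering actions described in the text.
def clean_check(operand_md, invariant):
    """Filterable if every operand's metadata matches the invariant
    (e.g., every referenced location is initialized memory)."""
    return all(md == invariant for md in operand_md)

def redundant_update(src_md, dst_md):
    """Filterable if propagating the source metadata would leave the
    destination metadata unchanged."""
    return src_md == dst_md
```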
The Filtering Unit handles an instruction event either as a clean check or as a redundant update. To maximize flexibility and applicability, the Filtering Unit implements three modes of operation: (1) Single-shot filtering either performs a clean check or identifies a redundant update, (2) Multi-shot filtering chains multiple single checks together to determine whether an event is filterable, (3) Partial filtering filters a part of the software handler functionality in hardware, thus reducing the handler’s length.
FADE’s hardware is fully programmable and allows for per-event definition of the filtering rules. In FADE, programmability is achieved by configuring two structures: (1) the event table, which includes per-event filtering rules, and (2) the Invariant Register File (INV RF), which keeps invariant values related to the monitoring task (e.g., unallocated, allocated, and initialized states for MemCheck). These structures are memory-mapped and programmed on a per-application basis.
Figure 5 shows the baseline filtering pipeline, which consists of four stages. Note that striped structures, including the Metadata Write stage, are only for Non-Blocking Filtering as discussed in Section 5. The pipeline works as
follows. First, the filtering rules are read from the event table. Next, the control unit uses the event information and the filtering rules to produce the control signals for subsequent stages. Then, the Filtering Unit accesses the metadata register file (MD RF) and a dedicated metadata cache (MD cache) to obtain metadata. The Filtering Unit may also access the INV RF to obtain monitor-specific invariants, if necessary. Finally, in the Filter stage, the filter logic checks whether the filtering condition is satisfied.
Stage 1: Event Table Read. The filtering accelerator dequeues an event (Figure 6(a)) from the event queue and accesses the event table with the event ID to obtain the event’s filtering rules. An event table entry (Figure 6(b)) includes the following information for each operand (i.e., s1, s2 and d): (1) the valid bit and the mem bit to denote the evaluated operands and the memory operands, respectively; (2) the number of MD bytes to be evaluated; (3) a mask to extract the appropriate bits. Each entry also includes the PC of the software handler to be invoked for unfiltered events.
Each entry includes the CC bit and the INV id for clean checks, and the RU field for redundant updates. The INV id indicates the invariant registers (one for each operand) to be used upon a clean check. The RU field encodes three options. In case of one source operand, the source metadata are directly compared to the destination metadata. In case of two source operands, the source metadata are composed using either OR or AND and then compared to the destination metadata. The rest of the fields are described later.
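A sketch of a per-event rule and the redundant-update composition described above; the field names paraphrase Figure 6(b) and are not the exact hardware encoding:

```python
# Paraphrased event-table entry (field names are ours).
from dataclasses import dataclass

@dataclass
class EventTableEntry:
    cc: bool          # handle the event as a clean check
    inv_id: int       # invariant register index used on a clean check
    ru: str           # redundant-update composition: 'direct', 'or', 'and'
    handler_pc: int   # software handler invoked for unfiltered events

def ru_filterable(entry, s1_md, s2_md, d_md):
    """One source: compare it directly to the destination metadata.
    Two sources: compose with OR or AND, then compare."""
    if entry.ru == 'direct':
        composed = s1_md
    elif entry.ru == 'or':
        composed = s1_md | s2_md
    else:  # 'and'
        composed = s1_md & s2_md
    return composed == d_md
```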
Stage 2: Control. The control unit processes the information obtained from the event table and uses combinational logic to generate control signals for subsequent stages (e.g., filter logic mux controls, selects and enables for MD RF).
Stage 3: Metadata Read. The Filtering Unit accesses the MD RF, the INV RF and the MD cache, to obtain metadata and invariant values. As application and monitor processes use different address spaces (a desirable feature that enhances system security and reliability), metadata accesses necessitate a translation from the application to the monitor address space. We fold the address translation into the MD cache access. The TLB of the MD cache, similar to M-TLB [2], contains the translation from a virtual application page to the physical page that contains the associated memory metadata. The M-TLB misses are serviced in software.
Stage 4: Filter. The Filtering Unit supports three modes of operation to filter events.
Single-shot Filtering. In a single cycle, the Filtering Unit compares up to three distinct operand metadata to an invariant (clean check), or compares the operand metadata to each other (redundant update).
Examples of single-shot filtering are shown in the first two entries of Figure 6(b). The first event table entry corresponds to a load instruction for MemLeak. FADE handles the event as a clean check (CC=1) and filters the event when both operands are not pointers. In doing so, the metadata of the event operands (i.e., the memory operand s1 and the register operand d) are compared to the non-pointer invariant, which is stored in the third entry of the INV RF (INV id=2). The evaluated metadata are one byte (MD bytes=1). The second event table entry corresponds to a load instruction that is handled as a redundant update.
Figure 7 details the filter logic, which is organized as three identical two-operand comparison blocks (labeled f1, f2, and f3 in the figure). Each block can compare any one of three event operands (i.e., s1, s2, and d) to another operand or to an invariant. Together, the three blocks allow for a single-cycle evaluation of the most complex single-shot condition (i.e., comparing each of the three operands – s1, s2, and d – to a different invariant).
Multi-shot Filtering. To accommodate complex monitors that require multiple checks to determine whether an event is filterable, FADE supports multi-shot filtering. The Filtering Unit processes multi-shot events in multiple cycles by performing one check per cycle, and maintains one entry in the event table per check, thus keeping each entry simple. To encode multi-check events, each event table entry requires two additional fields (shown in Figure 6(b)): (1) the next entry field, which contains a pointer to the next entry in the event table; and (2) the multi-shot bit (MS), which enables multiple checks to be considered in the final filtering outcome. As shown in Figure 7, the associated circuit (in bold) includes a clocked register and a multiplexer, controlled by the MS bit.
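Multi-shot chaining can be modeled as a walk over linked event-table entries, one check per cycle; the table layout below is our assumption, not the hardware's:

```python
def multi_shot_filter(table, first_id, ctx):
    """`table` maps entry id -> (check_fn, next_entry_id or None).
    The event is filtered only if every chained check passes; any
    failing check makes it unfilterable immediately."""
    entry_id = first_id
    while entry_id is not None:
        check_fn, next_id = table[entry_id]   # one check per "cycle"
        if not check_fn(ctx):
            return False
        entry_id = next_id                     # follow the next-entry field
    return True
```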
Partial Filtering. Partial filtering allows a part of the handler functionality to be executed in hardware, reducing the length of the software handler. A software handler may first perform a check and, based on the check's outcome, execute either an update or a more complex routine including multiple checks and updates. FADE accelerates such cases by performing the initial check in hardware. To support partial filtering, each event table entry includes a partial bit (P) (shown in Figure 6(b)), which drives the selection of the handler PC.
An example of partial filtering appears in AtomCheck, where the filter logic checks whether a shared memory location was last referenced by the same thread. Commonly, the check succeeds, and a simple software handler is dispatched to update metadata. Otherwise, a complex handler runs to check whether there is a potential atomicity violation. While both cases require software execution, the hardware check eliminates the code associated with the check itself, control flow, and register spills and fills.
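The AtomCheck example reduces to a hardware check that selects which software handler to dispatch; a sketch with hypothetical handler PCs:

```python
def atomcheck_dispatch(last_tid, cur_tid, update_pc, violation_pc):
    """Partial filtering sketch: the check itself (same thread as the
    last reference?) runs in hardware; only the chosen handler runs in
    software. Handler PCs here are placeholders."""
    # Common case: same thread -> short metadata-update handler.
    # Otherwise: complex handler that looks for an atomicity violation.
    return update_pc if last_tid == cur_tid else violation_pc
```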
4.2. Stack-Update Unit
Stack-update events, which set consecutive metadata addresses to a predefined value in response to function calls and returns, are handled in FADE via a dedicated Stack-Update Unit (SUU). The SUU implements a finite state machine that takes the stack frame's starting address and length as parameters to calculate the address(es) of the metadata block(s) covered by the stack frame. The SUU issues writes to the MD cache to set the target range of addresses to one of two predefined values (one value on function calls and another on function returns), which are stored in the INV RF.
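The SUU's address generation can be sketched as follows; the bytes-per-metadata-block ratio is an assumed parameter, not a figure from the paper:

```python
def suu_block_addrs(frame_start, frame_len, bytes_per_md_block=8):
    """Compute the metadata block addresses covered by a stack frame
    of `frame_len` bytes starting at `frame_start`. The SUU would then
    issue one MD-cache write per block with the predefined value."""
    first = frame_start // bytes_per_md_block
    last = (frame_start + frame_len - 1) // bytes_per_md_block
    return list(range(first, last + 1))
```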
5. Non-Blocking FADE
5.1. Observations
Due to true dependencies between monitored instructions, baseline FADE must stall filtering when an unfiltered event is encountered. Filtering resumes when the monitoring system completes the unfiltered event processing and the updated metadata become available. This organization penalizes performance because filtering and execution of unfiltered event handlers cannot overlap.
To overcome the serial processing of unfiltered events and subsequent dependent events, we make a critical observation: while monitors often maintain detailed metadata to support complex monitoring analyses, there is a subset of metadata, which we call critical, that includes sufficient information to decide whether a subsequent dependent event is filterable. Importantly, this critical state can be updated for unfilterable events directly in hardware in the Filtering Unit. These updates are non-speculative and are based on predefined rules that can be implemented in simple hardware.
For instance, for MemLeak, which performs reference counting to identify memory leaks, an event is filterable when its operands are not pointers. Therefore, just checking the pointer/non-pointer status of a memory location or a register suffices to make the filtering decision. For example, in case of a load instruction, if the source memory location has a pointer status, the destination register obtains a pointer status as well. However, to perform reference counting, MemLeak maintains additional metadata per register and memory location, which consist of a pointer to the context (explained in Section 6) of the corresponding malloc. While fundamental to MemLeak's monitoring algorithm, these additional metadata are non-critical from the perspective of the filtering task.
Overall, we observe that (1) there is critical (minimal) state that can be checked to determine the filtering outcome in a non-speculative way, and (2) this state can be updated in simple hardware based on simple pre-defined rules. Based on these observations, the filtering decision and the handling of unfiltered events can be decoupled, thus enabling the design of a Non-Blocking filtering unit that can continue filtering past an unfiltered event.
5.2. Extensions to the Baseline Pipeline
Figure 5 shows the pipeline extensions (striped) to support Non-Blocking Filtering. We introduce two new structures: the metadata (MD) update logic, which performs updates to the filtering-critical metadata for unfilterable events, and the filter store queue (FSQ), which stores the updated memory metadata. We also introduce a new pipeline stage, Metadata Write, where updates to metadata take place.
Processing of Instruction Events. Consider an unfilterable event that just enters the pipeline. The processing in the first three stages (Event Table Read, Control, and Metadata Read) is the same as in the baseline pipeline. In the Filter Stage, while the filtering condition is evaluated, the MD update logic computes the new value for the filtering-critical metadata. The new metadata value is subsequently used only if the filtering condition evaluates to false, indicating an unfilterable event. Otherwise, the new metadata value is discarded.
To determine the logic for critical metadata updates, we observed that critical metadata have minimal state and their propagation follows simple rules. Based on the studied monitors, we provide support for the following rules: (1) propagating the source metadata (s1 or s2) to the destination; (2) composing the new destination metadata from the two source metadata using OR or AND; (3) setting the destination metadata to a constant value, which is stored in an INV register denoted by the Non-Blocking/INV id field in the event table (see Figure 6(b)); and (4) conditionally performing one of the above actions after comparing the source operands to each other, to the destination, or to a constant.
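The four rule classes above can be sketched in software as a small decision function. The rule names, operand encoding, and the `inv` constant below are illustrative assumptions for the sketch, not FADE’s actual RTL:

```python
# Illustrative sketch of the MD update logic's pre-defined rules.
# Metadata values are small integers; rule names are hypothetical labels
# for the four rule classes described in the text.

def md_update(rule, s1_md, s2_md, dst_md, inv=0):
    """Compute the candidate destination metadata for one event.

    The result is used only if the filtering condition evaluates to
    false (an unfilterable event); otherwise it is discarded.
    """
    if rule == "prop_s1":      # (1) propagate a source to the destination
        return s1_md
    if rule == "prop_s2":
        return s2_md
    if rule == "or":           # (2) compose the two sources with OR ...
        return s1_md | s2_md
    if rule == "and":          #     ... or with AND
        return s1_md & s2_md
    if rule == "const":        # (3) set to the constant held in the INV register
        return inv
    if rule == "cond_eq":      # (4) conditional variant: act only if sources match
        return s1_md if s1_md == s2_md else dst_md
    raise ValueError(f"unknown rule: {rule}")
```

For instance, a propagation-tracking monitor such as TaintCheck would use the `or` rule for binary operations, while a constant-setting rule would serve an instruction that clears metadata.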
In the Metadata Write stage, the Filter Unit commits updated metadata to the MD RF (for register) or to the FSQ (for memory). Subsequent events with a true dependence on the updated metadata can then obtain them from the MD RF or the FSQ in Metadata Read stage. For memory metadata, the FSQ is searched in parallel with the MD cache. If a matching FSQ entry is found, it is used to satisfy the dependence; otherwise the metadata from the cache are used. To accommodate back-to-back dependencies, forwarding from the Metadata Write stage to the Filter stage is supported.
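The FSQ/MD-cache lookup priority described here can be sketched as follows; the structures are simplified Python stand-ins for the hardware (a real FSQ would be a small associatively searched queue probed in parallel with the MD cache):

```python
# Sketch of the Metadata Read priority: a matching FSQ entry (the
# youngest pending update) satisfies the dependence; otherwise the
# metadata come from the MD cache (modeled here as a dict).

class FilterStoreQueue:
    def __init__(self):
        self.entries = []                      # (addr, metadata), program order

    def write(self, addr, md):                 # Metadata Write stage
        self.entries.append((addr, md))

    def lookup(self, addr):                    # youngest matching entry wins
        for a, md in reversed(self.entries):
            if a == addr:
                return md
        return None

    def retire(self, addr):                    # handler done; MD cache is current
        self.entries = [(a, m) for a, m in self.entries if a != addr]

def metadata_read(addr, fsq, md_cache):
    hit = fsq.lookup(addr)
    return hit if hit is not None else md_cache.get(addr, 0)
```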
Eventually, the unfiltered event handler executes and updates both the critical and the non-critical metadata for registers and memory. Once the handler completes, the MD cache contains the updated value for the critical memory metadata (if any) and the corresponding FSQ entry is discarded. Subsequent accesses to these metadata are served by the MD cache.
**Processing of Stack-update Events.** As stack updates change the metadata state, filtering must stop upon a stack-update event to allow the SUU to set the stack frame metadata. Moreover, as pending unfiltered events may reference stack frame-related metadata, the unfiltered event queue must be drained by the consumer prior to stack-update processing.
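A minimal sketch of this ordering constraint, with hypothetical callbacks standing in for the software handler and the SUU:

```python
# Sketch of stack-update ordering: pending unfiltered events may
# reference stack-frame metadata, so the consumer drains them before
# the SUU updates the frame metadata. The queue is a plain list here.

def handle_stack_update(event, unfiltered_q, run_handler, suu_set_frame_md):
    while unfiltered_q:                 # drain pending unfiltered events first
        run_handler(unfiltered_q.pop(0))
    suu_set_frame_md(event)             # only now may the SUU update metadata
```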
### 6. Methodology
**Evaluated designs.** We evaluate two FADE-enabled systems, shown in Figure 8. The two-core monitoring system (Figure 8(a)) executes the application and monitor threads on separate cores to maximize concurrency [2]. Filtering takes place next to the monitor core. The single-core monitoring system (Figure 8(b)) is based on a fine-grained, dual-threaded core with a dedicated hardware thread for the application and monitor processes. This design point minimizes resource requirements, but results in higher slowdown because the core resources are shared between the application and monitor.
We also evaluate two unaccelerated systems, similar to the single- and two-core systems presented in Figure 8 but without FADE. In these systems, the application and the monitor communicate through a single queue.
**System configuration.** Table 1 summarizes the configuration of the evaluated systems. Additionally, FADE-enabled systems have a 4KB, two-way MD cache with one-cycle access latency, and a 16-entry Metadata TLB. A sensitivity analysis for these two structures (excluded due to space limitations) shows that this design point offers the best cost-performance ratio. The event table has 128 entries, covering the heavily used subset of the modeled ISA (SPARC). The event queue and the unfiltered event queue are 32 and 16 entries, respectively. Unless otherwise specified, experiments use Non-Blocking FADE.
**Simulation.** We use Flexus [26] for cycle-accurate full-system simulation. Flexus extends Simics with timing models of multithreaded cores, caches, and interconnect. For our evaluation, we use the SMARTS sampling methodology [27]. Our samples are drawn over one billion instructions of the monitored application. As our benchmarks are organized as a collection of loops, we sample over an execution interval that covers multiple iterations. For the parallel benchmarks, we follow the same approach to cover a representative part of the benchmark’s parallel section. For each measurement, we launch simulations from checkpoints with warmed caches (including the MD cache), and run 100K cycles to achieve a steady state of detailed cycle-accurate simulation before collecting measurements for the next 50K cycles.
**Power and Area.** To estimate FADE’s area and power, we synthesize our VHDL implementation with Synopsys Design Compiler. We use TSMC 45nm technology (core library: TCBN45GSBWP, V_DD: 0.9V) scaled down to 40nm half node, and target a 2GHz clock frequency. For the MD cache, we estimate area, power, and latency with Cacti 6.5 [14].
**Monitors.** To demonstrate FADE’s generality, we use a suite of five diverse monitors that cover a range of memory, security, and concurrency bugs.
**AddrCheck** [16] checks whether memory accesses are to an allocated region. The critical metadata encode two states (allocated or unallocated) per memory location, while the non-critical metadata include book-keeping information for bug reporting. FADE filters accesses to allocated data through clean checks.
**MemCheck** [16] extends AddrCheck to detect the use of uninitialized values, and **TaintCheck** [17] detects overwrite-related security exploits. For critical metadata, MemCheck has three metadata states (i.e., unallocated, uninitialized, and initialized) and TaintCheck has two metadata states (i.e., untainted and tainted). Non-critical metadata may include information related to origin tracking [1] or other bookkeeping information. FADE performs clean checks for legitimate accesses and filters redundant updates when metadata remain unchanged.
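As a sketch of these two filtering patterns for a TaintCheck-style monitor (binary taint metadata with OR-propagation assumed for illustration):

```python
# Sketch of clean checks and redundant-update filtering for a monitor
# with two metadata states (0 = untainted, 1 = tainted). An event is
# filtered, i.e. needs no software handler, when all operands are clean
# or when the propagated update would leave the metadata unchanged.

UNTAINTED, TAINTED = 0, 1

def can_filter(s1_md, s2_md, dst_md):
    new_md = s1_md | s2_md           # taint propagates through the operation
    clean = (s1_md == UNTAINTED and s2_md == UNTAINTED
             and dst_md == UNTAINTED)
    redundant = (new_md == dst_md)   # update would not change the metadata
    return clean or redundant
```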
**MemLeak** [13] identifies memory leaks through reference counting. The critical metadata consist of the pointer/non-pointer status of each register and memory word. Non-critical metadata consist of a pointer to the corresponding malloc’s context for pointers and a null value otherwise. The context includes a unique ID, PC, and a reference counter. FADE performs clean checks to filter events with non-pointer operand values.
**AtomCheck** [12] detects atomicity violations by checking access interleavings. For this purpose, it keeps track of the last access by each thread to each application memory location. AtomCheck maintains one byte of critical metadata per application word with the thread status bit and the thread id. Furthermore, it keeps non-critical metadata including the type (Read/Write) of the last access by each thread in local per-thread tables. AtomCheck is accommodated by Partial filtering, as explained in Section 4.1.
**Benchmarks.** For all monitors, except AtomCheck, we use the SPEC2006 integer benchmarks with reference inputs. These CPU-intensive benchmarks stress the monitoring system with a high event generation rate. For TaintCheck, we use the benchmarks (astar, bzip, mcf, omnetpp) that have tainting propagation and we exclude the rest. For AtomCheck, we use five multithreaded benchmarks: water and ocean from the SPLASH suite; and blackscholes, streamcluster, and fluidanimate from the PARSEC suite. Each benchmark has four threads that run on one core in a time-sliced manner. All benchmarks use 32-bit binaries.
### 7. Evaluation
#### 7.1. Filtering Efficiency
Table 2 shows that FADE filters 84-99% of all instruction event handlers. AddrCheck has the highest filtering ratio because nearly all instruction events can be filtered via clean checks as the applications access allocated memory. In contrast, TaintCheck has the lowest filtering ratio of 84%, as it performs value propagation that results in long propagation chains with a higher frequency of metadata updates.
#### 7.2. FADE versus Unaccelerated System
Figure 9 depicts the performance of FADE versus the unaccelerated monitoring system. In both systems, application and monitor tasks execute in dedicated hardware threads of a dual-threaded 4-way OoO core. Performance is normalized to an unmonitored (application-only) system.
In general, for the unaccelerated systems, we observe an average slowdown of 4.1x across monitors. For memory-tracking monitors (AddrCheck, AtomCheck), the average slowdown is 2.5x, while for propagation-tracking monitors (MemCheck, MemLeak, TaintCheck), the slowdown is 5.8x. FADE reduces the slowdown significantly for all monitors, with an average slowdown of 1.5x. FADE’s slowdown is 1.3x and 1.6x for memory- and propagation-tracking monitors, respectively.
Figure 9(a) shows AddrCheck’s performance, which is generally good on both systems as the monitor just processes non-stack memory instructions. In the unaccelerated system, AddrCheck’s slowdown ranges from 1.2x to 2.9x, with an average of 1.6x. FADE reduces the slowdown to an average of 1.2x by filtering out nearly all monitored events.
Figure 9(b) shows the results for MemLeak, a heavy-weight propagation-tracking monitor. In the unaccelerated system, we observe slowdown ranging from 3.4 to 11.5x, with an average of 7.4x. We note that the benchmarks with a high monitored IPC (e.g., 1.2 for bzip) generate events faster than those with a low monitored IPC (e.g., 0.2 for
**Table 2. FADE’s filtering efficiency** (filtering ratios for AddrCheck, AtomCheck, MemCheck, MemLeak, and TaintCheck).
mcf), resulting in higher slowdown due to the increased pressure on the monitor. FADE significantly reduces the slowdown to an average of 1.8x, thanks to its high filtering ratio and the hardware-accelerated stack-update unit. The highest slowdown is observed on astar (2.2x) and gcc (3.3x), which are characterized by a low filtering ratio (70%) and must frequently drain the unfiltered event queue at function call/return boundaries (Section 5.2).
Figure 9(c) presents results for AtomCheck. Although AtomCheck is a memory-tracking monitor with a low event generation rate, it has an average slowdown of 3.9x (8.2x max) in the unaccelerated system because the events are costly due to numerous monitoring actions. In contrast, FADE benefits from a high filtering ratio, resulting in an average slowdown of 1.6x (1.9x max).
Finally, FADE reduces the slowdown to an average of 1.4x for MemCheck (similar to MemLeak) and 1.6x for TaintCheck (similar to AtomCheck). Detailed results for these monitors are omitted due to space limitations. Across the five evaluated monitors, FADE reduces the monitoring slowdown to an average of 1.5x, versus 4.1x for the unaccelerated system.
#### 7.3. Performance for Different Core Types
To better understand the effects of core microarchitecture on monitoring performance, we evaluate the unaccelerated and FADE-enabled systems with different core types. Figure 10 summarizes the performance for three core microarchitectures: in-order, 2-way OoO, and 4-way OoO, averaged across all benchmarks.
For the unaccelerated monitoring systems (dashed bars), we observe a reduction in performance ranging from 7% to 51% for simpler core microarchitectures as compared to the 4-way design. Although the applications generate up to 2x fewer events per cycle on the in-order core than on the 4-way OoO core, each event handler executes up to 3x faster on 4-way OoO because event handlers consist of instruction sequences with high cache locality, resulting in high IPC on aggressive cores. Thus, we conclude that monitors are sensitive to the core microarchitecture.
In the FADE-enabled system (solid bars), performance is less dependent on the core type. For example, MemCheck performs marginally better on the simple microarchitecture (average slowdown of 1.2x on in-order versus 1.4x on 4-way OoO), showing that filtering leaves little work for the monitor core and the core microarchitecture is less important.
#### 7.4. Single-Core versus Two-Core System
Prior work [2, 25] has suggested utilizing otherwise idle cores to accelerate the monitoring task. We next evaluate this design point in the context of FADE.
Figure 11(a) compares the performance of single-core (dual-threaded) and two-core monitoring systems. Both are FADE-enabled and feature a 4-way OoO microarchitecture. The results indicate that the two-core design outperforms the single-core option by 15% on average (28% max) by eliminating resource contention between monitor and application threads. As the second core is expected to provide a theoretical speed-up of 2x over the single core, we investigate the reason for the limited benefit of the second core.
Figure 11(b) breaks down the utilization of the two-core system into three categories: cycles in which (1) the application core is idle because the event queue is full, (2) the monitor core is idle because FADE filters all events,
and (3) both application and monitor cores are utilized. As the figure shows, 48% to 97% of the time, one of the two cores is idle, as FADE either filters the incoming event stream (idling the monitor core), or the monitor core processes unfiltered events (backpressuring the application core). With both cores utilized only 22% of the time, on average, the benefit of the second core is clearly limited.
#### 7.5. Benefits of Non-Blocking Filtering
To show the benefit provided by Non-Blocking Filtering, Figure 11(c) compares the performance of Non-Blocking FADE (used in the studies above) to the baseline FADE that stalls on each unfiltered event.
We observe that Non-Blocking Filtering improves the performance by 2x for AtomCheck, MemLeak and TaintCheck, which have relatively low filtering ratios (<87%), and by 1.1x for AddrCheck and MemCheck, whose filtering ratio is high (>98%). The benefit of Non-Blocking FADE comes from overlapping the filtering actions with the unfiltered events processing.
#### 7.6. Area and Energy Efficiency
To model FADE’s area and power costs, we synthesized our RTL design in TSMC 40nm technology. Our design includes a 128-entry event table, a 32-entry event queue, and a 16-entry unfiltered event queue. Synthesis results show an area of 0.09mm² and a peak power consumption of 122mW. To estimate the area and power requirements of the 4KB MD cache, we use CACTI. We find the area cost of the cache to be 0.03mm², peak power of 151mW, and an access latency of 0.3ns.
### 8. Related Work
Prior work proposed hardware support for instruction-grain monitoring. Early proposals sacrifice flexibility by hardwiring the monitoring policy [4, 21]. Other proposals [5, 23, 24] allow for a number of different monitoring policies but their pipelines can only accommodate fixed-size metadata. Monitor-specific proposals include support for race detection [8], and spatial memory safety for C/C++ programs [7, 15]. DISE [3] instruments the instruction stream on-the-fly by injecting the instrumentation code directly into the pipeline. Another class of tools is based on watchpoints [11, 20, 28]; however, it cannot support certain monitors (e.g., propagation trackers) [28], and can degrade performance for certain metadata layouts [11].
Event filtering has been considered in prior work as a way to accelerate monitoring; however, prior proposals only considered filtering for a narrow range of behaviors by (1) targeting only a specific monitor [7, 19], (2) supporting only metadata of specific size [23], or (3) sacrificing bug coverage [2]. This work advances the state-of-the-art by (1) providing generalized support for filtering including partial and multi-shot filtering, (2) accelerating bulk metadata management (i.e., stack updates), and (3) proposing Non-Blocking Filtering.
### 9. Conclusions
This work introduced FADE, a Filtering Accelerator for Decoupled Event monitoring. The proposed design exploits common behavior across monitors to provide simple, programmable hardware for handling common application events while delegating infrequent complex events to software for maximum flexibility. To maximize throughput and avoid stalls in the presence of unfiltered events, FADE employs Non-Blocking Filtering — a hardware-assisted mechanism for concurrent processing of filterable and unfiltered events. Our results showed that FADE can filter 84-99% of application events across a range of monitors, resulting in an average slowdown of only 1.2-1.8x, thereby making instruction-grain monitoring practical.
### 10. Acknowledgements
We would like to thank Stavros Volos, Djordje Jevdijic, Cansu Kaynak, Almutaz Adileh, Javier Picorel, and the anonymous reviewers for their insightful feedback on earlier drafts of the paper. This work was partially supported
### References
ROYAL SIGNALS AND RADAR ESTABLISHMENT
Memorandum 4042
TITLE: THE VARIETIES OF CAPABILITIES IN FLEX
AUTHORS: I.F. Currie and J.M. Foster
DATE: April 1987
SUMMARY
Capabilities in Flex are first class data objects which allow one to define and limit the right to access data or obey an action. Their use extends from mainstores to filestores and across networks of Flexes. This paper gives a general description of how Flex capabilities are implemented, controlled and used. They are classified into four varieties, mainstore, filestore, remote and universal. Each of these varieties has its own range and lifetime designed to combine consistency, integrity and utility with implementability.
1. Introduction
It is common in everyday life for the possession of an object to confer some rights or privileges on the holder. Examples of such objects abound: an airline ticket gives one the right to travel on a particular flight; a cash-point card gives one the right to use an automatic cash-dispensing machine; and, of course, a sufficiency of bank-notes gives the holder all that money can buy. These objects can be regarded as "capabilities" for the rights they confer. Often, the relation between capability and right is one to one; in other words, one has the right if and only if one possesses the capability. Clearly, a capability must be difficult to forge otherwise the value of the right which it represents is debased.
In the computing context, a capability confers the right to obey an action like reading or writing data or running a program. If possessing a capability is necessary to perform an action, we have a good basis for solving many problems associated with the security, privacy and integrity of computer systems as well as a solution to more mundane problems such as the detection and diagnosis of program errors. Just as with non-computer capabilities, the control of the creation and distribution of capabilities is crucial; there is little point in trying to enforce a discipline which is easy to circumvent either by accident or design.
A paradigm for computer capabilities is given by a simple example where the capability allows one to read and write to a contiguous area of store; it might be implementing a vector in a high level language. The data within this area would only be accessible by instructions which made use of the capability to this area. For example, using the notation given in figure 1, an instruction to load the fourth word (say) of the area into a register would have to include the capability cap and the displacement 4 in its operands, perhaps something like:
    loadreg 4, capword
where capword (perhaps a register) contains the actual capability cap. Of course, this instruction is only legal if the size of the area is greater than four words.

In most capability architectures [1,2,3] the control and storage of capabilities to prevent misuse has usually been implemented by making capabilities special objects; only very privileged programs can create them and normal programs can only move them about in a circumscribed manner. In these architectures, capabilities exist in special registers or in special areas of memory; for example, capword in figure 1 would have to be a special purpose register or a word in a block of storage which contains nothing else but capabilities. This implies that one cannot easily hold capabilities and normal data in the same data structure.
Even at the lowest level of programming, one can be embarrassed by this separation of "scalar" data from capabilities. For example, the word-pair labelled reference in figure 1 could be usefully interpreted as a pointer to word 4 of the block whose capability is cap. The inability to store this object in contiguous words places a heavy burden on the programmer in implementing things like reference parameters to procedures. A similarly inconvenient object is the word-triple labelled subvector which might be a representation of a vector of three words starting from word 2 of the block given by cap. Any particular solution to this problem which gives special representations to references or vectors is inadequate since almost any juxtaposition of capabilities and scalars is required to create general data and program structures; these cannot all be anticipated in the initial design of the system.
This "apartheid" of scalars and capabilities is one of the reasons for the lack of success of the above capability machines. The difficulty of finding solutions to problems posed by the representation of arbitrary data structures caused programmers to flatten them out into a small number of blocks so that only a few capabilities had to be handled. This obviates most of the usefulness of capabilities by only giving very coarse-grained discrimination and protection. In addition, the system programmer finds that he cannot construct an object-oriented system because the components of an object like the reference and vector objects given above must be distributed across several blocks in store. In other words, these machines could only be used as rather inconvenient standard machines with few or none of the advantages of capabilities showing through.
Another drawback of these machines was their assumptions about how store was allocated. One assumption made was that compilers would check local objects so that capabilities were only needed for major objects. Thus the creation of new capabilities by allocating new storage would be done in large chunks and would be a fairly rare event, requiring a relatively expensive system call. However, this overlooked the possibility of the transmission of structured data between "programs" compiled independently. It also meant that compilers had to be correct, making the development of new compilers more difficult. To reap the benefits of a capability structure, one would like to use capabilities in as fine grained and dynamic fashion as possible. For example, I wish to implement lists as capabilities so that the cons function produces a new capability to a newly allocated block of two words containing the head and tail values; a long-winded system call to allocate the new block would be intolerable.
The Flex architecture [4, 5, 12] uses capabilities in a manner which does not suffer from these limitations. Capabilities in Flex are first class objects - they can be created by non-privileged programs and can be loaded and stored in just the same way as one loads or stores scalar objects without requiring specialised registers or storage areas. Instead of recognising capabilities by where they are stored, Flex capabilities are distinguishable data objects. For example, in Flex mainstore, each word has an extra "tag" bit. Scalar words have their tag bits zero, and capabilities are words with set tag bits.
The tag bits are not involved in the normal arithmetic or logical operations in Flex; indeed these operations are only legal between scalars. Otherwise, the tag bits are copied consistently in the loading and storing operations in Flex. Thus, each word containing cap in figure 1 has the tag bit set and each word containing an integer has it cleared. When a word with a tag bit set is produced ab initio, the interpretation of this word is always such that it represents a new capability different from all others. In other words, if you possess a capability, you either created it yourself or else you were given it by somebody else via some other capability - you cannot forge capabilities.
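A software model of this tagged-word discipline might look like the following sketch; the `Word` class and operations are illustrative stand-ins, not the Flex microcode:

```python
# Sketch of tagged words: every word carries a tag bit (0 = scalar,
# 1 = capability). Arithmetic is only legal between scalars; loads and
# stores copy the tag bit consistently along with the value.

class Word:
    def __init__(self, value, tag=0):
        self.value, self.tag = value, tag

def add(a, b):
    if a.tag or b.tag:
        raise TypeError("arithmetic on a capability is illegal")
    return Word(a.value + b.value, tag=0)

def store(memory, index, word):
    # The tag travels with the word, so a capability stays a capability
    # and can never be manufactured from a scalar.
    memory[index] = Word(word.value, word.tag)
```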
In the implementations of Flex up to the present (March 1986), the capability rules in mainstore have been enforced by micro-coding a Flex instruction set on a micro-programmable machine, the most recent being the ICL Perq workstation [12]. In this instruction set, a new mainstore capability can be created by obeying one of a set of unprivileged instructions to allocate the space and set it up depending on the type required. Capabilities other than mainstore ones exist in Flex; their access and creation rules are enforced by a mixture of software (both privileged and unprivileged) and firmware. These capabilities are represented in mainstore by one particular kind of mainstore capability; in addition they possess representations in other media like filestore or networks.
Capabilities as data objects form the basis of the Flex system and appear in many different guises and representations in mainstores, in filestores and across networks. Since one can handle them quite freely and create them to represent arbitrarily complicated objects, the Flex system is an "object oriented" system. It uses and creates objects directly without the need for intermediaries like names, directories or contexts.
Ideally one would wish that, once a unique capability has been created, it should exist as long as it is required; in other words as long as it is referenced. Practically, this requirement cannot be met - unexpected machine errors over a network are likely to wreck any such scheme. This is not to say that one should abandon the ideal, but rather that one should approach it as closely as possible and ensure that if ever one does "lose" capabilities by machine error, one always has a consistent fall-back position. As a trivial example, a hardware error in a single processor system would cause the loss of those capabilities in the main store which are part of the currently running program. This has no consequential inconsistencies if these capabilities cannot exist outside the main store; however, if they could have been written to file store, say, we would have had references from file store to a non-existent mainstore, which could be disastrous.
In order to control the consistent existence of capabilities, the Flex system classifies them into four main groupings, each with different rules for where they can exist and how, if at all, they can be transmitted. These groups are:
1. Mainstore capabilities
2. Filestore capabilities
3. Remote capabilities
4. Universal capabilities
A mainstore capability exists only in one mainstore, cannot be transmitted elsewhere and implements things like arrays and procedures in running programs; it is obviously a rather temporary thing, disappearing when the machine is switched off. A filestore capability retains its meaning from session to session, can exist in one filestore or in the mainstore of any processor which can access that filestore directly, can be transmitted to and from this one filestore or between these mainstores and is used to implement things like files, modules, etc. A remote capability can exist in any mainstore, can be transmitted between mainstores and is a means of performing an action on some processor from another processor on a network. A universal capability can exist anywhere in the Flex world, in any Flex filestore, mainstore or memory and represents a Flex object which is common across all Flex systems, for example, some version of a commonly used compiler.
This classification of capabilities has been derived from experience in the construction and use of various Flex systems. These have included a system with several processors connected to a common file-store and one with disjoint file-stores connected across a network. The remaining sections of the paper highlight some of the important aspects of the varieties of capabilities and their uses in the Flex system.
2. Mainstore capabilities
A mainstore capability exists in only one mainstore and cannot be transmitted elsewhere. If its machine is switched off it will disappear.
All capabilities accessible to a running Flex program are represented by pointers to disjoint blocks of store. A pointer is simply a word with its tag bit set to distinguish it from scalar words. It contains the "address" of a block which contains the size of the block and type of the capability in its first word as shown in Figure 2. The quote symbols are used deliberately here since this notion of an address does not enter further into the Flex architecture. One can access the information in the block only if one has the capability; the actual physical address of the block can change and is both useless and irrelevant. These capabilities are unforgeable in the sense that one cannot create a word with the tag bit set which is the same as another except by copying that word. When one creates a new capability it is guaranteed to be different from all others.
One of the most basic kinds of mainstore capability allows one to read and write words into memory. There are several instructions in the Flex repertoire which allow one to create new capabilities of this memory type. For example, there is an instruction which returns a capability to read and write a new block of given size; another allows one to pack away a value consisting of some number of words into a new block and deliver a capability to read and write into that block. The read/write capability in figure 2 could have been created by one of these instructions, and the read-only capability could only have been created by another instruction from this read/write capability. Other instructions allow one to read or write words (any mixture of scalars and capabilities) in the block via the read/write capability while only allowing reading via the read-only capability.
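The behaviour of these memory capabilities can be sketched in modern terms. The following Python fragment is illustrative only - the class and function names are invented, and Flex itself enforces these rules in micro-code on tagged words, not via a language's object system:

```python
# Illustrative sketch (not Flex itself): memory capabilities as unforgeable
# objects. A read/write capability owns a block; a read-only capability can
# be derived from it and shares the same underlying storage.

class ReadCap:
    """Allows reading words only within the bounds of the block."""
    def __init__(self, block):
        self._block = block          # the "address" is hidden inside

    def read(self, i):
        if not 0 <= i < len(self._block):
            raise IndexError("outside block bounds")
        return self._block[i]

class ReadWriteCap(ReadCap):
    """Additionally allows writing within the bounds of the block."""
    def write(self, i, value):
        if not 0 <= i < len(self._block):
            raise IndexError("outside block bounds")
        self._block[i] = value

    def read_only(self):
        # derive a restricted capability sharing the same block
        return ReadCap(self._block)

def new_block(size):
    """Analogue of the instruction delivering a capability to read and
    write a new block of given size."""
    return ReadWriteCap([0] * size)
```

Writes through the read/write capability are visible through the derived read-only capability, but the read-only capability offers no way to write, and neither allows access outside the block's bounds.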
The access rules for memory blocks are just one example of the kind of rights and restrictions conferred by capabilities. The algorithms defining these rules are very simple - one can only read and write within the bounds of the block defined by a memory capability. Clearly, one could imagine other algorithms defining other access rules, and Flex does have other kinds of capabilities with other fixed access rules. However, Flex also allows one to create capabilities where the algorithm is chosen by the programmer by using the most general kind of mainstore capability, the procedure.


The procedures are just Landin's closures [6], and using them, any arbitrary set of rights or restrictions can be implemented. Such a closure is created by an instruction which binds Flex code with values which form the non-locals of the procedure; both code
and non-locals being themselves capabilities as in figure 3. On calling the procedure, the code in the code block can access the non-locals implicitly; for example, there is an instruction to load the \( n^{th} \) word of the current non-local block without having to extract the capability explicitly. Two other areas are similarly accessible, namely the locals and the constants (in the code block) of the procedure. The locals block contains the local variables of the procedure and link information; it is produced by the call instruction either by generating it afresh or else by retrieving one which is finished with by a previous call. Note that the same code capability can be shared between many procedures; indeed code blocks are loaded in such a way that there is never more than one copy of a code block at any time in mainstore regardless of how many programs are using it.
The possession of a procedure capability does not allow one to read either its code or non-local values; one can only obey the code with these non-local values bound to it. This means that the code can control the kinds of access that are possible to the non-local values without the user of the procedure being aware of their representation or even of their existence. For example, it is trivial to construct a pair of procedures, push and pop, which implement the classical stack operations by sharing the same non-locals as in figure 4. Here the underlying data structure which contains the values in the stack is in fact a list but is completely hidden and is impossible to access in any way other than by calling the procedures. Procedures defining abstract data types or other kinds of packages are usually implemented in Flex sharing non-locals like this.

*Figure 4 - capabilities implementing the stack procedures*
It is important to note that the creation of these procedures for abstract data types is essentially a dynamic process. For example,
the natural way in Flex to construct the stack procedures in figure 4 would be to have another procedure `make_stack`, say, which when called would deliver a pair of procedures to implement a new stack, different from all others. Each pair would have different non-local blocks, but all of them would have the same code blocks given by the capabilities `push-Code` and `pop-Code`; these capabilities would probably be found in the constants of the code block in the procedure `make_stack` which simply closes them with a newly generated non-locals block to give the two stack procedures. Once created, the `push` and `pop` procedure capabilities are completely independent of `make_stack` and can exist even when `make_stack` disappears.
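The `make_stack` construction just described can be sketched as Landin-style closures in Python. The names follow the text, but the code is an illustrative analogue, not Flex code - in Flex the shared "code" would be the single loaded `push_Code`/`pop_Code` block and the dictionary below would be a fresh non-locals block:

```python
# Sketch of make_stack: each call closes the shared push/pop code over a
# fresh, hidden non-locals block. The underlying list is reachable only
# through the two returned procedures.

def make_stack():
    non_locals = {"items": []}      # stands in for the new non-locals block

    def push(x):                    # same code, different non-locals per stack
        non_locals["items"].append(x)

    def pop():
        if not non_locals["items"]:
            raise RuntimeError("stack empty")
        return non_locals["items"].pop()

    return push, pop                # independent of make_stack once created
```

Each pair of procedures returned by a call of `make_stack` shares one hidden data structure, and distinct calls yield stacks that cannot interfere with each other.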
An interesting kind of procedure capability is one which has type:
\[(\text{Key},\text{Info}) \rightarrow (\text{Key} \rightarrow \text{Info})\]
and body:
\[
\lambda (k,i) \,.\, (\lambda x \,.\, \text{if } x = k \text{ then } i \text{ else FAIL fi})
\]
ie given a key and some information, it produces another procedure which will give back the information if and only if the parameter of its call is the same as the key. Since the key could be a capability and since capabilities are unforgeable, this gives a completely safe way of passing around sensitive information. Only those procedures which possess the key (probably in their non-locals) will be able to get at the information. Thus, the information can be transferred safely between trusted procedures via an untrustworthy intermediary. This concept is used in representing the other varieties of capabilities (namely filestore, remote and universal) in mainstore, so that they can be held by untrustworthy programs without fear of compromising their access rules. This use is so important that it has been particularised to form another type of capability called a "keyed" capability. This resembles a simple memory capability except that it can be locked so that the contents of the corresponding block are completely inaccessible. The only way to unlock the block is by knowing the contents of the first word of the block; thus the first word is the key and the remainder of the block is the information.
To reap the maximum benefit from its use, the lifetime of a mainstore capability is at least as long as it is required; i.e., if one possesses a capability, it must be alive. In turn, this implies that a block of physical mainstore can only be reused if there is no capability which points to it. To discover this, it is necessary to do a general trace and garbage collection when the physical limits of the mainstore are reached. In the current implementations of Flex, this is done in the micro-code, as are all the other manipulations of physical addresses to produce capabilities. As mentioned before, a mainstore capability can only exist in one mainstore; one cannot transmit mainstore capabilities to other mainstores or filestores. This, together with the use of
the tag bits to distinguish between scalars and capabilities, allows the use of a fast garbage collection algorithm which is linear in time in all of its variables.
The Flex instruction code which is interpreted by the micro-code clearly takes full advantage of the properties of the capabilities which it creates and manipulates. It has also been designed for ease of use by high level languages. There is no primitive level assembler for Flex; all of the programs written for and running in Flex have been compiled from some high level language. The instruction set allows one to produce compact code; for example, there are only 7 instructions (including the procedure-exit instruction) occupying 13 bytes in a straightforward, unoptimised translation of pop_code given in figure 4. Leaving aside the procedure-call and -exit instructions, the obeyed code would involve about 12 memory accesses including the instruction fetches in the Perq implementation of Flex, each access taking 75ms for a 32 bit word (the Perq is actually a 16 bit word architecture, but 32 bit store access has no extra penalties on even word boundaries). These accesses effectively define the time required to obey the instructions, since other actions required to be performed by these instructions (for example, checking the access rules) are mostly hidden behind the operand fetches. The code in push_code is slightly smaller in size and also in time, provided that the cons operation does not provoke a garbage collection. The procedure-call instruction might also provoke a garbage collection if there was no workspace available for the locals of the procedure call. If workspace was available, then it takes 10 to 12 memory accesses to obey this instruction which deals with the link and sets up the new local areas; the procedure-exit instruction taking much the same time to do the inverse. In summary, the time taken by the pair of procedure calls given by the somewhat nugatory expression, push(pop), is about 65 store accesses, provided no garbage collection takes place.
The time taken by garbage collection obviously depends on the mix of blocks and capabilities in store at the time of garbage collection. Some blocks never have capabilities in them (for example, the block defining the raster display in the PerqFlex), while others could be filled with them. The PerqFlex garbage collector is a compacting one and all other Flex processing stops while it is active; of course, some interrupt processing and data transfers, including keyboard interactions, continue at a lower level. The time taken by this garbage collector is given approximately by:
$$(2\times (L-F) + V + 5\times C + 2\times B + 4\times A)$$
memory accesses
where $L$ is the total number of live words, $V$ is the number of these which could be capabilities and $C$ is the number which are capabilities. $B$ is the total number of blocks before garbage collection and $A$ the number of live blocks. $F$ is the number of
words which are not moved in the garbage collection, either because they are always required at the bottom of memory (e.g., the raster image) or simply because no free space is recovered below them. In a 2 Mbyte Perq, the actual time taken by one garbage collection averages out to about 1.3 ± 0.2 secs in typical use. Such a use might be running two long Algol68 compilations in parallel with normal text editing; in this case garbage collections occur about once every 4-5 secs, each collection recovering an average of 1.2 Mbyte of free store. These figures do not change appreciably when running three or even four compilations in parallel since the code is shared between the processes and this compiler (not originally targeted for Flex) is profligate in its use of temporary storage compared with the sizes of more permanent tables that it needs to maintain in mainstore across one compilation.
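For convenience, the garbage-collection cost formula above can be written as a function; `gc_accesses` is a hypothetical helper for estimating costs, not part of Flex:

```python
# The compacting-collector cost formula from the text, expressed as code.

def gc_accesses(L, F, V, C, B, A):
    """Estimated memory accesses: 2*(L-F) + V + 5*C + 2*B + 4*A, where
    L = total live words, F = words not moved, V = words which could be
    capabilities, C = words which are capabilities, B = blocks before
    collection, A = live blocks after."""
    return 2 * (L - F) + V + 5 * C + 2 * B + 4 * A
```

The dominant term is the copy cost `2*(L-F)`, which is why leaving unmoved words (such as the raster image) at the bottom of memory pays off directly.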
3. Filestore capabilities
A filestore capability can exist in only one filestore, or in the mainstore of any computer which directly accesses that filestore. It can be transmitted between mainstores and the filestore. It retains its meaning from session to session. The word "filestore" is used here for want of a better term. Its use carries no implication of the properties of existing filing systems, but simply defines memory which persists in some permanent form. Other terms such as "data-base" or "persistent heap" might equally well be used but would carry just as many unwanted connotations.
In analogy to mainstore capabilities, a filestore capability is a "pointer" to a block of data on a particular filestore. This data can include any scalars, any universal capabilities, and any other filestore capabilities belonging to the same filestore. Note that cross-filestore capabilities are not allowed; any filestore capability in a filestore "points" to another block in the same filestore. In mainstore, the minimum size of a block is one word (consisting of just the overhead word). On filestore the granularity is bigger depending on the implementation involved; in PerqFlex the minimum size of block is 32 bytes. Filestore capabilities have types similar to mainstore capabilities; the procedures which read the data corresponding to a filestore capability do so by producing a mainstore capability of the same type. In particular, filestore procedures can only be read to produce mainstore procedures. This means that private information can be safely hidden behind the interface of a filed procedure; for example, logging on to Flex is done simply by calling a filestore procedure which has things like passwords and dictionaries safely hidden in its non-locals.
As mentioned previously, a filestore capability is represented in mainstore by a keyed capability; its corresponding block contains information on how to retrieve the data. The key to this locked block is a characteristic of the filestore and the basic outputting instructions and procedures will not allow it to be transferred to an alien filestore. On the filestore itself, or in any transmission medium in networks, non-mainstore capabilities are distinguished from scalars in much the same way as mainstore capabilities, using extra tag bits or bytes; in fact, the total size of a PerqFlex filestore capability on filestore is 12 bytes which are recognisably different from non-capabilities. When inputting these capabilities into mainstore, care is taken that only one copy of the block corresponding to a particular capability occurs in each mainstore, that is, all the copies of a mainstore representation of a non-mainstore capability point to the same keyed block. The search to ensure this is fast and economic.
It takes an average of 50 store cycles to establish that the capability is a new one and obviously less to find it if it is not. Filestore capabilities are not rare objects in the Flex mainstore; on average there are about 2000 alive at any one time, a large proportion arising from the fact that most mainstore code blocks have an equivalent on filestore. The uniqueness property is used mainly as an aid to short-circuit the traffic to and from filestore in the case of filestore capabilities; the implementation and use of remote and universal capabilities is more critically dependent on it.
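The uniqueness property - at most one mainstore block per non-mainstore capability - amounts to interning on input. A sketch, with invented names, using a dictionary where Flex uses a fast micro-coded search:

```python
# On input from filestore or network, a capability's identification bytes
# are looked up so that all copies of the capability share one keyed block
# in mainstore.

_intern_table = {}                   # identification -> mainstore block

def input_capability(ident):
    """Return the single mainstore representation for `ident`,
    creating it on first sight."""
    block = _intern_table.get(ident)
    if block is None:
        block = {"ident": ident}     # keyed block holding retrieval info
        _intern_table[ident] = block
    return block
```

Reading the same capability twice therefore yields the very same block, which is what allows filestore traffic to be short-circuited.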
A standard filestore capability is created by writing data of appropriate type to a filestore and receiving in return the filestore capability to read that data. Note that there is no notion here of writing to a particular place on filestore - it is a "write-once" operation. Since the data written away can include other capabilities, one can form non-circular tree structures (strictly speaking, acyclic graphs) of arbitrary complexity on filestore. This tends to mean that the system is quite economical about the amount of traffic to and from the filestore. For example, an editable file is implemented on Flex as an object of an abstract data type called an Edfile; procedures associated with this abstract type include an editor of type Edfile → Edfile and a lister of type Edfile → Void. The representation of each Edfile is a single filestore capability to a block which contains other values, including capabilities, as well as text. Figure 5 shows the screen representation given by the editor to an important file in the PerqFlex system. This particular example is rather short of plain text; most of it displays non-textual values. Each of the boxes, eg "Mathematical routines", is the screen representation of a non-textual value in the file; the text in the box, namely "Mathematical routines", is just a convenient label or banner for the value. In fact, each of the non-textual values illustrated here happens to be a disc capability which is an Edfile containing further information, usually including yet more Edfiles and the banners for them are indeed unique in this file [see the treatment of this file as a universal capability]. The contents of any inner Edfile can be displayed by calling the editor recursively by pointing at it and pressing the appropriate key.
A listing of the file shown in figure 5, including the expansion of its inner Edfiles covers 125 A4 pages, yet any part of it can be usefully reached by no more than five or six key-presses following the tree structure of the Edfiles, transferring no more than the equivalent of six pages from filestore.
By the same token, producing a new Edfile only involves writing away those inner Edfiles which actually change. When one exits from a call of the editor which actually makes some changes to the data rather than simply displaying it, the new data is written away to create and deliver a new Edfile; if it happened to be an inner call (on Mathematical routines, say) then this Edfile replaces the old value in the display using the same banner. The original file given by the parameter of the call has not been altered in any way; it is still available by the same method as was used to get it in the first place. Given the result of the outer call of an edit, committing the change is usually done by giving a name to the new Edfile in some dictionary; it could be the same name as used to get the parameter of the call to give a later version of the file.
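This "write only what changed" behaviour is that of a persistent tree over a write-once store. A sketch in Python, using a dict as the filestore and integers as capabilities (all names here are invented):

```python
# Editing never overwrites a filestore block: new blocks are written only
# along the path from the root to the change, and every untouched subtree
# is shared between the old and new versions.

import itertools

_store = {}                             # stands in for the filestore
_fresh = itertools.count()              # source of new "capabilities"

def write(value):                       # write-once: always a new capability
    cap = next(_fresh)
    _store[cap] = value
    return cap

def read(cap):
    return _store[cap]

def edit(cap, path, new_leaf):
    """Return a capability for a version of the tree at `cap` with the
    subtree named by `path` replaced; unchanged subtrees are shared."""
    if not path:
        return write(new_leaf)
    children = dict(read(cap))          # eg {"Mathematical routines": cap2}
    head, rest = path[0], path[1:]
    children[head] = edit(children[head], rest, new_leaf)
    return write(children)
```

The old root still describes the old file in full, exactly as the text says: the parameter of the edit is not altered in any way.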
The non-textual values in an Edfile are not necessarily other Edfiles; indeed the file illustrated in figure 5 is principally intended to contain values (and descriptions) of another abstract type called Module. For example, part of the internal file given by Mathematical routines is shown in figure 6; the boxes here are screen representations of these Module values. A Module is, in fact, several filestore capabilities which, when operated on by
appropriate procedures, give the text (as an Edfile), interface specification and compiled code of some program; eg \texttt{exp : Module} gives the exponential routine as a procedure with a real parameter and real result. As it happens this routine was written in Algol68; however the \texttt{Module} is language independent and can be used by other languages. To include the interface entities of a module within a program, one usually puts the \texttt{Module} value itself into the text of the program, rather than its name; ie a program text with \texttt{exp : Module} in its “use-list” would be able to use the exponential routine in the normal manner. The advantage of having the module value rather than its name here is that the program text now effectively includes the texts of all of the modules which it uses, independently of context, so it can be examined once again in a tree-like fashion. Further, since the text and compiled code are bound closely together in a Module, there is never any confusion about the text of a compiled program, even at run time. There are approximately 500 Module values in the file shown in figure 5 (with a total of about 50000 lines of text) accessible for reuse by any programmer in any program. These range from the simple mathematical routines given in figure 6 to things which form part of the system like the compilers, editor and command interpreter. The creator of a Module value has the capability to amend it (by recompiling a corrected text, say). This is another committal operation, this time expressed as an operation on a value rather than by inserting a name in a dictionary.
| Module | Possible failures |
| --- | --- |
| \texttt{arccos : Module} | (8000.3) if ABS parameter > 1.0; result in range \([0.0, \pi]\) |
| … | (remaining rows garbled in the source: further routines with failure codes such as (8000.4)-(8000.8), real overflow (0.16) conditions, and result ranges such as \([-\pi/2, \pi/2]\) and \((-\pi, \pi]\)) |
Figure 6 - part of Mathematical routines
The various committal actions for remembering changes to things like files and modules clearly involve some way of overwriting filestore; there has to be somewhere that we can record the state of the filestore, at least when the machine is switched off. This is done in a Flex filestore by having a small number of root variables in the filestore which contain capabilities which allow one to reach all of the accessible filestore. A root variable can contain a single filestore capability; it can be read to give its contents and its contents can be altered by a single unitary operation. Otherwise, it can be used just like any other filestore capability so far as transferring it to and from filestore is concerned. The filestore is only considered to be different when a root changes and a filestore capability remains alive between sessions so long as it can be reached by some path in the tree starting from a root. Thus, a root variable would usually contain a pointer to some dictionary structures and set of modules for a given operating environment; each different log-in operation is likely to give access to a different root.
It is important that a filestore remains consistent within itself; in other words that it is never left in some state of incomplete updating. For example, let us consider how one updates a dictionary derived by a path through the tree structure on filestore starting from some root. One re-constructs a new dictionary and all the tree structure leading to it on filestore before updating this root. Since the process of updating the root is a unitary operation, we know that the filestore is either in its new state or, perhaps because of some failure on the way, in its old state with the old dictionary. One thing is certain, the dictionary is never part new and part old. Thus, provided that different roots contain independent information, that is, they never require to be updated together, the filestore never gets into an inconsistent state. Of course, this is just the most primitive level of consistency control; higher level controls for simultaneous updating and reading still require to be applied. However, any solution of the higher level problems requires that the lower level problem should be solved.
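The consistency argument can be made concrete with a small sketch: build the whole new structure on the write-once filestore first, then commit with a single root assignment. `Root` and `update_dictionary` are invented names; Flex performs the commit as one micro-coded unitary operation:

```python
# A reader following the root sees either the old dictionary or the new
# one in its entirety, never a mixture - the only overwrite is the root.

class Root:
    """A root variable holding a single filestore capability."""
    def __init__(self, cap):
        self._cap = cap

    def read(self):
        return self._cap

    def commit(self, new_cap):
        self._cap = new_cap             # the single unitary operation

def update_dictionary(root, store, name, value):
    new_dict = dict(store[root.read()])  # re-construct the new dictionary...
    new_dict[name] = value
    new_cap = max(store) + 1             # ...as a fresh write-once block...
    store[new_cap] = new_dict
    root.commit(new_cap)                 # ...then update the root in one step
```

A failure anywhere before the final `commit` leaves the root, and hence the visible filestore, exactly as it was.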
An extremely useful by-product of this method of organising filestore is that a complete history of consistent states of the filestore is potentially available. Since the only thing that is being overwritten is a single filestore capability in the root, one only has to arrange to remember the successive contents of the root. In the general purpose computing context, complete histories are seldom required and the Flex filestores are generally garbage collected and tidied periodically simply to save storage space. However, in the intervening periods, it is still a great boon to be able trivially to reset a file to the value it had a few days, hours or even minutes ago. On the other hand, in a
project environment, the total history might be required for all sorts of reasons to control the project and this could be done in several ways with Flex filestore. It would be simplest if the total on-line mass storage was big enough to contain all the historical information; if not, one simply keeps off-line copies of the filestore before each garbage collection. In all of the current implementations of Flex the filestore garbage collection is done off-line. The time taken for this garbage collection is roughly proportional to the number of live capabilities in it which can lead to other capabilities; the effect of the other variables is swamped by the time taken to access the disc or discs on which they reside. A PerqFlex filestore on a single 35Mbyte Winchester disc containing about 25000 capabilities, including the standard system, takes 20 minutes to garbage collect, freeing about half the disc; it takes very heavy usage for this to be necessary more than once a week. This is a non-compacting garbage-collector. To compact it the live blocks are sent across a network to another filestore; this takes marginally longer than the non-compacting version.
4. Remote capabilities
Remote capabilities can exist in any mainstore and can be transmitted between mainstores. Mainstore and filestore capabilities allow access to data in local mainstores and filestores respectively; remote capabilities allow access to other mainstores and filestores across a network. Flex uses a remote procedure call [9,10] mechanism for its network; it differs from most other RPC networks in that the possible procedures do not have to be agreed between the machines from the start.
In the current implementations of Flex, the particular type of data held in a mainstore or filestore block associated with a capability is largely irrelevant to its access rules. For example, the access rules for a block containing integers are the same as for one containing floating point numbers; also the types of the parameters and results of a procedure do not affect the kind of checks that are done to ensure that the rules for procedures are obeyed. That is not to say that such checks are never done, but just that they can be done at a higher level of abstraction, for example, within compilers or command interpreters. Even if these checks are done wrongly (because of a bug in a compiler, say) then the integrity and security of the system is not compromised. Thus a small number of different types of capabilities (eg memory, procedure, keyed etc) suffices for mainstore and filestore capabilities. On the other hand, both remote and universal capabilities require to be described and implemented in a much more fine-grained manner using the kind of types found in strongly-typed programming languages. At a primitive level, one can see that a good type description is highly desirable for transfers between computers which use different representations of data (eg in changing floating point format). This use of types arises in the Courier protocol [11]. However, the Flex type structure is much more powerful and allows the transfer of capabilities for dynamically created objects, including procedures. The types of these procedures describe how their parameters and results are to be handled and also makes explicit the high-level protocol of the transactions involved in their calls.
Some of the notation for the high-level type structure in use in Flex has already been introduced. Aside from various primitive types like Real, Int, Char and Void, the "→" symbol indicates a procedure type, separating its parameter and result types (eg sin has type Real → Real); structures or records are given by Cartesian products represented by parentheses and commas (eg a complex number might have type (Real,Real)); disjoint Cartesian sums giving unions or variants are represented by the prefix Union on a
list of their possibilities. There are other constructors for concrete types which give representations of various other ways of structuring data in an orthogonal manner; the only one of these used here is Vec to describe a vector (eg a string of characters has type Vec Char). The representations given by abstract data types like Edfile and Module mentioned above are defined by the procedures which operate on them and chosen by the inventors of these abstract types. Flex types were originally conceived as part of the Flex command language [7] and were based on the type structure of ML [8].
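Since flattening must treat types as data (as discussed below in connection with Moded), it is natural to represent the type constructors themselves as values. A sketch of the constructors named above, with an invented tuple encoding that is not Flex's actual representation:

```python
# Types as data: each constructor builds a tagged tuple which a flattener
# could later inspect to decide how to encode a value of that type.

def proc(param, result):      # Param -> Result
    return ("proc", param, result)

def product(*fields):         # (T1, T2, ...)
    return ("product",) + fields

def union(*alts):             # Union of the listed possibilities
    return ("union",) + alts

def vec(elem):                # Vec Elem
    return ("vec", elem)

# eg the type of make_stack: Void -> (Int -> Void, Void -> Int)
make_stack_type = proc("Void", product(proc("Int", "Void"),
                                       proc("Void", "Int")))
```

A flattener dispatching on the first element of such a tuple can then walk any type, including procedure types, which is the property the remote call mechanism relies on.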
A remote capability can be constructed to give a unique token for a value of any type. This token can be transmitted anywhere in the network and always be decoded to give the original value in the node which created it. In practice, most remote capabilities are remote procedures, since the only generally available operation on remote capabilities is the remote procedure call. Access to any data in a remote machine can always be expressed by calling a procedure in that machine; no extra penalties are really involved since there is no question of a network directly "addressing" a mainstore in analogy to local access. For example, suppose that machine A wished to generate a stack in the mainstore of B to allow A to push and pop integers. Expressed in procedural terms this means that A must have a capability to call a make_stack procedure in B which can create two further capabilities, like those in figure 4, which it can give to A; these two new capabilities will themselves allow calls of push and pop procedures for the new stack. In fact the make_stack procedure in B has type:
\[ \text{Void} \rightarrow (\text{Int} \rightarrow \text{Void},\ \text{Void} \rightarrow \text{Int}) \]
The first procedure of the result pair is the push for the stack and the second is its pop. In order to call this procedure in B, processor A must possess a remote capability which B associated with make_stack. This remote capability has type:
\[ \text{Remote}(\text{Void} \rightarrow (\text{Int} \rightarrow \text{Void},\ \text{Void} \rightarrow \text{Int})) \]
At the remote call, the two result procedures will be sent to A as further remote capabilities and these will be transformed in A into procedures of type:
\[ ((\text{Int} \rightarrow \text{Void}),\ (\text{Void} \rightarrow \text{Int})) \]
which themselves do remote calls for the push and pop operations.
The transformation of procedures to remote capabilities and back illustrated above is part of the mechanism of how the parameters and results of a remote call are treated. This mechanism is called "flattening"; it transforms structured data into a vector of bytes suitable for transmission across a network. On receipt of this vector, the inverse "unflattening" operation is carried out to reconstruct the data in the remote machine. Some flattening operations will involve the creation of new remote capabilities; these will be transmitted across the network as distinct tokens.
recognisably different from scalar data in the vector of bytes. This discussion is to a large extent independent of the particular lower level protocols required to send these vectors of bytes around the network. However, it will be seen that the security and integrity of the remote capability mechanism depends on that of the lower level protocols. If the protocols are insecure, logically or physically, then remote capabilities can be forged either by accident or design by sending a suitably constructed vector of bytes. Extra safeguards can be built in at the higher level, but these can only reduce the probability of forgery without actually making it impossible.
The action of calling a remote procedure, rem say in A, consists of flattening the parameter, par, of the call, sending the resulting vector of bytes together with the remote capability as a remote call to the originator of rem, B say, and then waiting for the result. The remote machine B will unflatten the parameter to reconstruct par and apply the local procedure associated with rem to it. The result of this local call will then be flattened and sent back to the waiting caller in A which unflattens it to give the result of the remote call. Thus A does something like this:
\[
\text{unflatten\_ans}[\text{rem}](\text{remote\_call}(\text{rem}, \text{flatten\_par}[\text{rem}](\text{par})))
\]
where the answer to the call remote_call is evaluated in B as:
\[
\text{flatten\_ans}[\text{rem}](\text{associated\_proc}[\text{rem}](\text{par}))
\]
The remote capability rem must have been invented in B by calling a procedure called new_remote. A call of new_remote with parameters consisting of a procedure capability and its type will create a new remote capability, different from all others; this procedure will be the one given by associated_proc above. The type will be used to define the various flattens and unflattens like flatten_ans and unflatten_ans used above. This is a simplification of what actually happens since Flex also has a system of trapping and analysing exceptions in local programming which is extended over the network to allow remote diagnosis of errors.
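Putting the pieces together, the remote-call mechanism can be sketched as follows. The sketch is illustrative only: `json` stands in for the type-directed flatten/unflatten operations, the token format is invented, and both nodes live in one process rather than across a network:

```python
# new_remote invents a fresh token for a local procedure; a remote call
# ships the flattened parameter plus the token to the owning node, which
# applies the associated procedure and flattens the answer back.

import json

class Node:
    def __init__(self, name):
        self.name = name
        self._assoc = {}                       # token -> local procedure
        self._count = 0

    def new_remote(self, proc):
        self._count += 1
        token = f"{self.name}:{self._count}"   # unique within this node
        self._assoc[token] = proc
        return token

    def remote_call(self, token, flat_par):
        par = json.loads(flat_par)             # unflatten the parameter
        ans = self._assoc[token](par)          # apply the associated proc
        return json.dumps(ans)                 # flatten the answer

def call_remote(node, token, par):
    """What the calling machine does: flatten, send, wait, unflatten."""
    return json.loads(node.remote_call(token, json.dumps(par)))
```

Only the node that invented a token can decode it back to a procedure, which is the property that makes the token a capability rather than a mere address.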
It is clear from the above that types must be treated as data to determine how one does the flattening and unflattening operations. This is provided for in Flex types by a new kind of value of type Moded; one can construct a value of type Moded from a combination of any other value and its type. This notion is used all over the Flex system; for example the type of the procedure which finds the meaning of a name in a user dictionary is:
\[
\text{Vec Char} \rightarrow \text{Moded}
\]
where the result includes both the value and type corresponding to the name given by the parameter. The type of the procedure new_remote given above is:
\[
\text{Moded} \rightarrow \text{Moded}
\]
where the type given in the answer Moded will always be that in the parameter Moded prefixed by Remote. This is similar to expressing the type of \texttt{new\_remote} polymorphically as:
\[ \text{ANYTYPE} \rightarrow \text{Remote ANYTYPE} \]
although the usual interpretation of polymorphism (see [8]) is that the procedure is independent of the type of its parameter rather than that it uses the type as data.
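A Moded value, as described above, is simply a value packaged with its own type so that the type can be inspected as data. A minimal sketch, assuming a string stands in for Flex's much richer graph representation of types, and with `new_remote_type` as our own hypothetical helper showing the Remote-prefixing behaviour:

```python
# Sketch of Moded values: a value paired with its type, treated as data.
# The type here is just a string; Flex represents types as a graph of
# unique keyed capabilities.
from dataclasses import dataclass
from typing import Any

@dataclass(frozen=True)
class Moded:
    value: Any
    type_: str

def new_remote_type(m: Moded) -> str:
    # new_remote's answer Moded carries the parameter's type prefixed
    # by Remote, as the text describes.
    return "Remote " + m.type_

m = Moded(value=42, type_="Int")
```

Treating the type as an ordinary field is exactly what distinguishes this from the usual reading of polymorphism, where the procedure must be independent of the parameter's type.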
The representation of types in mainstore has gone through many metamorphoses; originally they were represented by a simple vector of integers. Now, they are represented by a natural graph structure of capabilities, each different type being represented by a unique keyed capability which gives the constructors and constituent types. This uniqueness is maintained in much the same way as for filestore capabilities and their representation allows efficient means of short-circuiting their translation to and from a filestore representation. The filestore representation of a type is just a pair consisting of a filestore capability and an integer; the integer just indexes one of the types coded in the block corresponding to the capability. This representation is not unique; different filestore representations can give the same type.
The flattened representation of a remote capability in the network must perforce be as some sequence of bytes; it is distinguishable from scalar data in the unflattening operation using the usual bit or byte tags. This sequence of bytes must identify the processor which created the capability in the first place and give a unique identification within that processor. The byte sequence will be used on input into a processor from the network to find (or create if it is not already there) the mainstore representation of the capability. In mainstore, a remote capability is represented as a capability to a keyed block containing, among other things, the identification information. As mentioned above, the inputting mechanism will ensure that there will only be at most one such block for each remote capability in each mainstore. Besides the identification information, the mainstore representation also contains an associated value defining the meaning of the capability. If the processor is the one which created the value in the first place by applying \texttt{new\_remote} to some procedure, then the associated value will be this procedure. If the processor is some other one, then the associated value will be the type of the capability (which formed part of the flattened value sent to the processor). Thus, finding the correspondence between the capability and its meaning is a fast and economic process.
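The "at most one mainstore block per remote capability" property described above is essentially interning: the unflattening operation looks the identification up in a table and creates the block only on first arrival. A sketch under that assumption (all names are ours):

```python
# Sketch of interning remote capabilities on input from the network:
# the mainstore representation is keyed by the capability's network
# identification, so repeated arrivals yield the same block.
_intern = {}

class RemoteCap:
    def __init__(self, origin, local_id):
        self.origin = origin        # processor which created the capability
        self.local_id = local_id    # unique identification within that processor
        self.meaning = None         # associated procedure (at origin) or type

def unflatten_cap(origin, local_id):
    key = (origin, local_id)
    if key not in _intern:          # create the mainstore block on first arrival
        _intern[key] = RemoteCap(origin, local_id)
    return _intern[key]

a = unflatten_cap("B", 7)
b = unflatten_cap("B", 7)           # a and b are the same object
```

The uniqueness makes the capability-to-meaning lookup a single table access, which is why the text can call it "a fast and economic process".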
It is clear that it is fairly easy to find suitable flattened representations for values whose types are primitive or constructed from arrays or structures of other flattenable values. What might be less clear is how one can flatten any procedure values involved in the parameters or result of a remote call. To flatten a procedure value, one constructs a new remote capability by applying new_remote to the procedure being flattened. The flattened representation of this new capability is now the representation of the procedure. To unflatten this procedure representation one merely constructs a new procedure of the same type as the original which does a remote call on the capability as above. This is the way that the push and pop procedures in the remote call of make_stack above are sent from B to A. Sending procedures to and fro like this completely hides the remote capabilities and remote calls involved so that the network is quite transparent.
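The flatten-a-procedure rule can be sketched in the same toy style: flattening mints a capability, unflattening builds a stub of the same shape that performs the remote call. The registry and stub names are hypothetical, and the transport is again a direct call:

```python
# Sketch of procedure flattening: a procedure flattens to a fresh
# capability id; unflattening yields a stub that calls it remotely.
import json

_procs = {}
_counter = 0

def flatten_proc(proc):
    global _counter
    _counter += 1
    _procs[_counter] = proc          # new_remote applied to the procedure
    return _counter                  # the capability's flat representation

def remote_call(cap_id, flat_par):
    return json.dumps(_procs[cap_id](json.loads(flat_par)))

def unflatten_proc(cap_id):
    # The stub hides the remote call, making the network transparent.
    def stub(par):
        return json.loads(remote_call(cap_id, json.dumps(par)))
    return stub

push_rep = flatten_proc(lambda x: x * 2)
push_stub = unflatten_proc(push_rep)
doubled = push_stub(21)              # doubled is 42
```

Since the stub has the same type as the original procedure, the caller cannot tell a local procedure from a flattened-and-reconstructed one.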
Such transparency is not always desirable; often one wishes to deal with the remote capabilities directly. For example, modifying the example above slightly, one could write a procedure in B which calls make_stack and applies new_remote to its resulting push and pop to give the result of the procedure; this procedure would have type:
\[
\text{Void} \rightarrow (\text{Remote}(\text{Int} \rightarrow \text{Void}), \text{Remote}(\text{Void} \rightarrow \text{Int}))
\]
A remote capability to this procedure in A would have type:
\[
\text{Remote}(\text{Void} \rightarrow (\text{Remote}(\text{Int} \rightarrow \text{Void}), \text{Remote}(\text{Void} \rightarrow \text{Int})))
\]
A remote call of this would result in a pair of capabilities of type:
\[
(\text{Remote}(\text{Int} \rightarrow \text{Void}), \text{Remote}(\text{Void} \rightarrow \text{Int}))
\]
which would have to be explicitly called remotely to give the stack operations. However, processor A could send one of them to processor C and the other to processor D, thus creating a channel of information from C to D via B which is totally independent of A.
Given that one possesses a remote procedure capability it is easy to see how others can be generated from its parameters or results. One way that has been chosen in Flex to start off the process is by means of the procedure first_function of type:
\[
\text{ComputerId} \rightarrow (\text{Vec Char} \rightarrow \text{Moded})
\]
which allows one to ask a remote node, identified by ComputerId, for a value associated with a name given by the Vec Char. Provided that the remote node allows it, this could give access to any of the facilities available in the remote node. For example, the Moded value delivered might be a command line interpreter for the remote machine of type:
\[
((\text{Void} \rightarrow \text{Vec Char}), (\text{Vec Char} \rightarrow \text{Void})) \rightarrow \text{Void}
\]
whose first parameter is a procedure to give the command interpreter a line to interpret and the second is one to deal with the response to that command. Similarly, a serial file transfer might be a procedure of type:
\[
\text{Vec Char} \rightarrow (\text{Void} \rightarrow \text{Union}(\text{Vec Char}, \text{EndOfFile}))
\]
where the Vec Char parameter is some name to identify the file and the resulting procedure will give successive lines of the file on successive calls.
The lifetime of some remote capabilities can sometimes be deduced from their own actions; for example the capability involved in the serial transfer of a file becomes meaningless when the end of the file is reached. In these cases the originating processor can discard the capability, safe in the knowledge that any further calls on the capability are mistakes on the part of the remote processor. However, this is not sufficient to provide for the freeing of the resources associated with a remote capability; even in the case of serial transfer a failure in the remote processor might mean that the end of the file is never reached. For this reason, the principal method of freeing these resources once again depends on another kind of garbage collection. In general, a processor which creates a remote capability remembers it so long as another processor possesses it. Every time a remote capability is sent to another node in the network, this fact is noted by the originating node, either because it sent it itself or because the sending node informed the originator. The originating node can then periodically enquire of these processors whether they still have it; if they do not, then the original processor can forget about it. The method of making this enquiry depends both on the uniqueness property of the mainstore representation of remote capabilities and on the storage allocation for mainstores. The remote capability is sent to each remote processor. If there is no longer a copy of it there (i.e. it has been freed by a mainstore garbage collection), the inputting process will have to re-create the capability; this fact can be recognised and reported to the enquirer. Both these enquiries and the primitive remote calls depend on some lower level of protocol to determine whether the remote processor is still active so that the communication can degrade gracefully when it is not.
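The enquiry-based collection described above can be sketched as follows. The data structures (`holders`, `remote_store`) and the `collect` helper are hypothetical names for the originator's bookkeeping; a real implementation would make `enquire` a network exchange exploiting the interning of mainstore representations:

```python
# Sketch of the liveness enquiry: the originator remembers which nodes
# were sent a capability and periodically asks each one whether it still
# holds a copy; when no node does, the capability can be forgotten.
holders = {"cap-1": {"A", "C"}}                 # originator's record of possessors

remote_store = {"A": set(), "C": {"cap-1"}}     # each node's interned capabilities

def enquire(node, cap):
    """True if the node still has the capability's mainstore block."""
    return cap in remote_store[node]

def collect(cap):
    still = {n for n in holders[cap] if enquire(n, cap)}
    holders[cap] = still
    return len(still) == 0                      # safe to forget?

can_forget = collect("cap-1")                   # False: C still holds it
```

Here node A's copy has been freed by its local garbage collector, so after one collection pass only C remains on the originator's list.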
5. Universal capabilities
A universal capability is one which can exist anywhere in the Flex world in mainstores, filestores or on networks. It will be used to represent a commonly used object like a compiler, editor, or common module or even a type. The data or program corresponding to a universal capability can exist in many copies distributed throughout the Flex world, usually in local filestores. This description would seem to imply that such an object would have to be constant through time, though we know that compilers, editors and such values are amended from time to time as errors are removed and improvements are made. We therefore choose to think of the value corresponding to a universal capability as an approximation to a Platonic ideal, and to say that these approximations are ordered in the sense that later ones are better than earlier ones. So every operation that can be done with an earlier value must be able to be done with a later one, with a result which is a better approximation than the result of the earlier operation. It is unfortunately impossible to check this property; we merely state that the mechanisms will work if it obtains. This same idea of approximation is applicable not only to objects like programs, where the idea is of a more accurate or less erroneous program or one which applies to more arguments, but also to such values as bank accounts, where tomorrow's statement contains the same information as today's, together with extra information and the only operations allowed are those which specify a date, rather than work in terms of "now".
One aim with universal capabilities is to provide a mechanism to allow better versions of program or data to be transmitted in a fairly passive way. By this is meant there is no need for the originator of the change to tell everybody about some change all at once; the change can be passed from processor to processor independently of the originator. All that is required is that a processor hears about a change from some other processor and it sets in train the actions to ask the other processor exactly what the change was.
The representation of a universal capability consists of an identification which is unique over the Flex world and a version number defining the approximation. Both of these parts are transmitted as the network or filestore representation of the capability. In addition, a processor which knows a meaning for the capability will associate this meaning (usually as some other kind of capability or capabilities) with its mainstore representation in much the same way as the meanings of remote capabilities are held. Clearly it must be possible to change this association if a later version of the capability is encountered, for example, as part of a remote transmission. In general, if a later version is found as part of a transmission, then the receiver will ask the sender to give the more up-to-date version. Thus, later versions of a capability will diffuse through a network so long as the various nodes of the network hear about them from any source.
The possession of a universal capability implies that one has the right to demand that another processor gives one its current meaning for the capability. Conversely, one has the obligation to provide a meaning that one possesses to anyone who demands it. Leaving aside the problem of how one introduces a universal capability in the first place, the updating from one version to a later one is achieved using remote capabilities. If a remote processor sends a universal capability with a later version than the local one, the local processor makes a remote call to get its version updated. The parameters of this remote call are simply the universal capability and a procedure to update its meaning in the local processor. For example, if the universal capability was a simple serial text file then the updating procedure could have type:
```
Remote((Universal, Union(Vec Char, EndOfFile)) -> Void) -> Void
```
where the `Union(Vec Char, EndOfFile)` parameter will be called by the remote processor with the successive lines of the new text file as parameter to recreate it in the local processor and update the universal.
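The version-driven diffusion can be sketched as follows. The table layout and the `receive`/`fetch_meaning` names are our own; in Flex the fetch would be the remote call through the updater just described:

```python
# Sketch of universal-capability diffusion: each node holds a
# (version, meaning) pair; on receiving a capability with a later
# version number, the receiver pulls the new meaning from the sender.
local = {"doc-file": (3, "old contents")}

def receive(uid, version, fetch_meaning):
    """Called when a universal capability arrives from some sender."""
    have_version, _ = local.get(uid, (0, None))
    if version > have_version:
        # Make the remote call back to the sender to get updated.
        local[uid] = (version, fetch_meaning())
    return local[uid]

got = receive("doc-file", 5, lambda: "new contents")
```

Note that the originator never has to announce anything: any node that happens to mention a later version triggers the update, which is the passive transmission the text describes.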
This is an unrealistically simple example, since most important objects in Flex are much more highly structured than simple serial text files. For example, the documentation file shown in figure 5 is an important universal capability which is an Edfile. Its updater is much more complicated; it admits of the possibility of transferring other values besides lines of characters. Since the documentation file is structured so that each of its constituent Edfiles is small enough to be displayed in roughly a screenful, the protocol for its transfer can be expressed so that each Edfile is transferred in a single transaction in one block rather than serialising the file as in the previous example. The type of its updater can be expressed as:
```
Remote((Universal, Block) -> Void) -> Void
```
where
```
Block = Union(PlainText,
Line, Page,
InnerEdfile, .... (with about 15 other possibilities)
....)
```
and
```
PlainText = Vec Char;
Line = Page = Vec Block;
InnerEdfile = (Vec Char, Date, Void -> Block)
```
Each Edfile encountered as part of the file is transferred as an InnerEdfile. For example, "Mathematical routines" is transferred as a triple consisting of the string "Mathematical routines", the date and time at which it was created and a procedure which, if called, will deliver the Block corresponding to the contents of the inner Edfile.
If the local processor already has this file created at this date in its copy of the documentation file, the procedure need never be called. Once again, only that part of the tree which has actually changed need be transferred. The other possibilities in the union given by Block include structures defining how to transfer modules; this might involve transferring program text for compilation to amend the Module.
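The lazy, date-checked transfer of inner Edfiles can be sketched as follows. The (name, date, thunk) triple mirrors the InnerEdfile type above; the `merge` helper and the recorded `calls` list are our own scaffolding for illustration:

```python
# Sketch of the Edfile-style lazy transfer: each inner file is sent as
# (name, date, thunk); the receiver calls the thunk only when its local
# copy is older, so unchanged subtrees are never transferred.
calls = []

def make_inner(name, date, contents):
    def thunk():
        calls.append(name)               # record which subtrees were fetched
        return contents
    return (name, date, thunk)

def merge(local, inner):
    name, date, thunk = inner
    have = local.get(name)
    if have is None or have[0] < date:   # only fetch new or changed subtrees
        local[name] = (date, thunk())
    return local

local = {"Mathematical routines": (100, "sin, cos")}
merge(local, make_inner("Mathematical routines", 100, "sin, cos"))
merge(local, make_inner("Editor notes", 120, "draft"))
```

Only the second merge actually invokes its thunk: the local copy of "Mathematical routines" carries the same date, so that whole subtree is skipped, which is the saving the text describes.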
Just as the universal capability itself is an approximation to some constant, its updating procedure must also be a constant since it must be capable of being called remotely by some independent processor and hence the type and actions of its parameters must be known to both processors. There clearly has to be no possibility of disagreement in the updating process - once a universal capability has been created its updater must have the same properties of approximation to some ideal as the capability itself. In particular, the type of updater will never change.
In principle, one could introduce an arbitrary updater with each new universal capability. However, in order to transfer it to a new processor, the updater would have to be expressed in terms of existing universal capabilities, otherwise it could not be generally transferred. In practice, there are relatively few different kinds of values represented by universal capabilities, with procedure modules, types and various kinds of files like the documentation file above being the most important. Clearly all capabilities whose values are constructed in the same way can have the same updating procedure. Thus, one can start with one universal capability in every processor which allows one to introduce a new universal capability of one of these common kinds to a remote processor. In this way, most of the problem is solved provided that this initiating capability allows one to update its action by introducing new kinds of updaters.
Universal capabilities do not solve the problems involved with preventing simultaneous changes to the same object; somehow or other there has to be a controller for each capability to ensure that the versions are strictly ordered. Usually this controller is a human one, namely the originator (or inheritor) of the capability. However, on a Flex network, the controller can produce a new version from any of the most recent copies regardless of its physical location. Thus there need be no dependence on a single filestore, for example, to be the "master" copy, with all its attendant dangers of data loss.
6. Conclusion
The varieties of capabilities described here have been arrived at by the experience of implementing and using various Flex configurations using networked machines with both local and shared filestores. Once one accepts the notion of capabilities as first class data objects in the mainstore of a computer, then the extension to allow similar objects to exist in filestores and networks is inevitable. The particular classification given here is a consequence of the often conflicting aims of keeping capabilities as long as they are required while trying to preserve consistency in the sense that if one possesses a capability it should be meaningful. Storage limitations will dictate that, in general, this requires some kind of a trace of the capabilities being used with subsequent garbage collection. The classification given here of mainstore, filestore, remote and universal capabilities defines properties of their use and lifetimes so that this task remains manageable.
References
6. P.J. Landin, "The mechanical evaluation of expressions", Computer Journal, Vol. 6, No. 4, pp. 308-320 (Jan 1964)
SPROBES: Enforcing Kernel Code Integrity on the TrustZone Architecture
Xinyang Ge, Hayawardh Vijayakumar, and Trent Jaeger
System and Internet Infrastructure Security Laboratory
The Pennsylvania State University
{xxg113, hvijay, tjaeger}@cse.psu.edu
Abstract—Many smartphones now deploy conventional operating systems, so the rootkit attacks so prevalent on desktop and server systems are now a threat to smartphones. While researchers have advocated using virtualization to detect and prevent attacks on operating systems (e.g., VM introspection and trusted virtual domains), virtualization is not practical on smartphone systems due to the lack of virtualization support and/or the expense of virtualization. Current smartphone processors do have hardware support for running a protected environment, such as the ARM TrustZone extensions, but such hardware does not control the operating system operations sufficiently to enable VM introspection. In particular, a conventional operating system running with TrustZone still retains full control of memory management, which a rootkit can use to prevent traps on sensitive instructions or memory accesses necessary for effective introspection. In this paper, we present SPROBES, a novel primitive that enables introspection of operating systems running on ARM TrustZone hardware. Using SPROBES, an introspection mechanism protected by TrustZone can instrument individual operating system instructions of its choice, receiving an unforgeable trap whenever any SPROBE is executed. The key challenge in designing SPROBES is preventing the rootkit from removing them, but we identify a set of five invariants whose enforcement is sufficient to restrict rootkits to execute only approved, SPROBE-injected kernel code. We implemented a proof-of-concept version of SPROBES for the ARM Fast Models emulator, demonstrating that in Linux kernel 2.6.38, only 12 SPROBES are sufficient to enforce all five of these invariants. With SPROBES we show that it is possible to leverage the limited TrustZone extensions to limit conventional kernel execution to approved code comprehensively.
I. INTRODUCTION
Kernel rootkits pose a serious security threat because an adversary that successfully installs a rootkit can control all user-space processes running on that kernel and can hide from traditional antivirus software. Rootkits are a common problem for conventional operating systems, which are large, complex software systems that may contain many latent vulnerabilities and which run many processes with full privilege that if compromised can easily install a rootkit.
Many smartphones have now adopted conventional operating systems to leverage existing functionality and hardware support. However, this makes rootkits a threat to smartphone systems as well. For example, CVE-2011-1823 [1] was assigned to an integer-overflow bug in a daemon process on Android 3.0. This bug enables a local adversary to gain root privilege, which is sufficient to install a rootkit. Thus, it is likely that rootkits for smartphones will be seen in the wild. The problem we examine in this paper is whether we can leverage available smartphone hardware to restrict and possibly detect rootkits already injected into a running kernel.
To restrict and/or detect kernel rootkits, researchers have proposed utilizing hardware and software mechanisms, such as virtualization and secure coprocessors, that have control over the operating system itself. Using such mechanisms, privileged operations, i.e., those that can affect the actual processor state, are trapped, providing the capability (e.g., by inspecting trap events) to monitor the operating system activities. For example, Garfinkel et al. [2] and Chen et al. [3] proposed using virtualization to monitor running operating systems. A hypervisor or VMM has visibility into a running operating system and can control operating system access to hardware resources, such as memory, enabling intrusion detection software to detect possible attacks and restricting the adversary to attacks that avoid detection. Many virtualization-based methods have been proposed to detect possible rootkit behaviors [4], [5], [6], [7], [8], [9]. In addition, researchers have also proposed methods to protect the integrity of the kernel using VM introspection. SecVisor [10] and NICKLE [11] are two representative VMM-based solutions that claim to ensure integrity of kernel code over the system lifetime. However, virtualization is currently considered to be too expensive for smartphone systems. Also, coprocessor-based intrusion detection methods have been proposed [12], [13], where a separate hardware device with access to host memory can examine that memory with the aim of detecting rootkit behaviors. A limitation of coprocessor-based solutions is that the coprocessor has a limited view of kernel state; in particular it cannot access registers of the host processor. Even the Intel System Management Mode (SMM) layer has been utilized to explore rootkit detection methods [14]. However, the SMM environment is very resource-limited, making introspection complex, and it is only available on Intel processors.
Based on the observation that over 95% of smartphones now use ARM processors [15], we envision that smartphones need an introspection solution for that hardware. Instead of virtualization extensions, ARM introduced hardware extensions for security called ARM TrustZone technology [16], [17] in ARMv6. Intuitively, TrustZone physically partitions all system resources (e.g., physical memory, peripherals, etc.) into two worlds: a secure world for security-sensitive resources and a normal world for conventional processing. TrustZone protects the secure world resources from the normal world, but the secure world can access resources in the normal world. This hardware separation protects the confidentiality and integrity of any computation in the secure world while permitting the secure world to view the normal world. One well-known use case of TrustZone is Apple's Touch ID. Using TrustZone, Apple established a trusted path between a fingerprint scanner and the secure world, which ensures that the fingerprint database is protected from the rest of the software [18]. Although the secure world can easily be used as a slave, making it a master can be troublesome because each world has full discretion over its own resources, meaning the normal world can use its resources (e.g., modify virtual memory settings) without mediation by the secure world.
In this paper, we utilize the TrustZone extensions to develop SPROBES, a novel instrumentation mechanism that enables the secure world to cause the normal world to trap on any normal world instruction and provide an unforgeable view of the normal world’s processor state. This property of SPROBES helps facilitate monitoring over the normal world, as the secure world can choose the normal world instructions for which it wants to be notified, receive the current processor state of the normal world, and perform its desired monitoring actions before returning control to the normal world. Other than the placement of instrumentation, SPROBES are invisible to the normal world, so no changes are required to the operating system to utilize SPROBES.
We demonstrate SPROBES by developing a methodology for restricting the normal world’s kernel execution to approved kernel code memory. To do this, we define a set of five invariants that when enforced imply that the supervisor mode of the normal world complies with the W\( \oplus X \) invariant [19] for an approved set of immutable kernel code pages, even if a rootkit has control of the normal world kernel. That is, adversaries running a rootkit in the normal world cannot inject new code without detection nor modify kernel code pages. As a by-product, SPROBES protect themselves from modification, even from a live rootkit. We show that these invariants can be enforced comprehensively using only 12 SPROBES for the Linux 2.6.38 kernel. We find that each SPROBE hit causes 5611 instructions to be executed using the ARM Fast Models emulator, but that most SPROBES are never hit in normal execution and those that are hit are either those enforced in normal VM introspection or account for less than 2% of the instructions executed.
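The mediation idea can be illustrated with a conceptual sketch (our illustration, not the paper's implementation): an instrumented instruction address traps to a secure-world handler that sees the processor state and decides whether execution may continue. The address, the `state` dictionary, and the `wx_handler` policy are all hypothetical:

```python
# Conceptual sketch of an SPROBE: a trap registered at an instruction
# address invokes a secure-world handler before execution continues.
sprobes = {}

def set_sprobe(addr, handler):
    sprobes[addr] = handler              # secure world chooses the instruction

def execute(addr, state):
    """Normal-world execution of one instruction address."""
    if addr in sprobes:                  # unforgeable trap into the secure world
        if not sprobes[addr](state):
            raise RuntimeError("policy violation at %#x" % addr)
    return state

# Invariant-style check: refuse a page mapping that is both W and X.
def wx_handler(state):
    return not (state["writable"] and state["executable"])

set_sprobe(0xC0081000, wx_handler)       # hypothetical address of MMU-update code
ok = execute(0xC0081000, {"writable": True, "executable": False})
```

The real mechanism works at the hardware level, but the shape is the same: the normal world cannot skip the handler, and the handler sees the true state, which is what makes the trap unforgeable.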
Contributions. In this paper, we develop a TrustZone-based solution to restrict the executable memory available to live rootkits. In particular, we make the following contributions:
- We present SPROBES, a novel, cross-world instrumentation mechanism that can break on any normal world instruction transparently. The SPROBES mechanism enables the secure world to dynamically break into any normal world kernel routine and specify a trusted handler in the secure world to mediate that routine.
- We show that SPROBES can be used to restrict normal world kernel execution to only approved kernel code pages, even if that kernel is under the control of a rootkit. We identify five invariants that must be enforced and describe a placement strategy for SPROBES to enforce those invariants for ARM TrustZone architectures.
- We evaluate SPROBES by applying the placement strategy to the Linux 2.6.38 kernel running in the normal world. We find that only 12 SPROBES are necessary to enforce the five invariants comprehensively. Further, we find that such monitoring can be efficient, as most SPROBES are not hit in normal operation, and others account for less than 2% of the instructions executed or are typically applied by VM introspection.
With SPROBES we show that it is possible to leverage the limited ARM TrustZone extensions to limit conventional kernel execution to approved code comprehensively. Effectively, SPROBES enable implementation of the breakpoints necessary to perform typical VM introspection without a separate hypervisor layer.
The remainder of this paper is organized as follows. Section II introduces the problem of restricting kernel execution to approved code pages even when that kernel may be infected with a rootkit in terms of five invariants. Section III introduces the TrustZone architecture and explains the challenge of mediating normal world execution from the secure world. Section IV describes the design of SPROBES, a mechanism for setting breakpoints for monitoring normal world operating system execution. We develop a design that utilizes SPROBES to enforce the five invariants necessary to restrict kernel execution to approved code pages in Section V. We describe our implementation for the ARM Cortex-A15 processor emulated by Fast Models 8.1 emulator in Section VI. We evaluate the security guarantees achieved and performance overheads in Section VII. Section VIII presents related work. Finally, we conclude the paper in Section IX.
II. PROBLEM OVERVIEW
Smartphone systems are now deployed on conventional operating systems, such as Linux or Windows, inheriting both their functionality and their threats. One significant threat is that an adversary may be capable of installing a kernel rootkit, giving the adversary full control of the operating system’s execution. Conventional operating systems have large code bases, so latent, exploitable vulnerabilities are likely. Moreover, these systems have many privileged processes that all can install a rootkit trivially if compromised. As mentioned in the Introduction, vulnerabilities in privileged processes in Android systems have been reported.
A goal of system defenses is to restrict the attack options available to an adversary. Operating systems now deploy several defenses, such as W⊕X and address space layout randomization [20], [19], [21], to prevent adversaries from using injected code as part of an attack and to increase the difficulty of guessing the location of existing code pages. W⊕X forces adversaries to use a memory page either as data, which can be written, or as code, which can be executed. If the defender selects only legitimate code pages as executable, then the adversary can only execute memory on those pages, limiting the code available for running exploits. However, even with this limitation, an adversary can still launch attacks that reuse existing code, based on the idea of return-oriented programming (ROP) [22]. ROP attacks leverage control of the stack pointer to execute exploits that use available code (i.e., code set as executable) to implement the malicious logic. Address space layout randomization aims to make it impractical for an adversary to guess the correct address of available code, but an adversary may have means to extract the correct addresses, such as through information disclosure attacks [23], enabling effective ROP-style attacks.
For this problem, we assume that the kernel initially is enforcing W⊕X over an approved set of kernel code pages⁴, but that an adversary may still be capable of executing some form of a ROP attack to launch their rootkit. We observe that rootkits could compromise the integrity of kernel execution in the following ways. First, as W⊕X is the common technique used by the kernel to prevent code injection attacks, a rootkit can simply disable the W⊕X protection. For ARM processors, such as those used by most smartphones, this is done by disabling the Write eXecute-Never (WXN) bit. When the WXN bit is set, writable pages are never executable, regardless of how the page table is configured. In this case, we assume that the kernel has the WXN bit set initially, but a rootkit may execute existing code, if available, to disable that protection, enabling the rootkit to inject code in the kernel.
Second, to bypass the W⊕X protection, rootkits can modify page table entries. Suppose the adversary wants to modify a code page, which is initially read-only. First, she alters the permission bits of that page from executable to writable, as the two are mutually exclusive. Next, she writes to that page arbitrarily. Finally, she changes the permission of that page back from writable to executable, as if nothing had ever happened.
Third, an alternative to modifying page table entries in place is to duplicate the page table elsewhere and reset the page table base address (e.g., TTBR on ARM and CR3 on x86). Following steps similar to the second approach, i.e., marking a page as writable and reverting it to executable once the write is complete, the adversary would be able to inject kernel code.
Fourth, if the adversary can disable the MMU, she can bypass all the existing memory protections (e.g., page permissions and W⊕X), as all of them are based on the virtual memory system. Note that disabling the MMU can limit what a rootkit can do as well: it reduces the adversary to using physical memory addresses, and she may not know the physical addresses of the code necessary to continue her ROP attack. However, if the operating system maps the virtual addresses of kernel space to exactly the same physical addresses, the rootkit could benefit from disabling the MMU without limiting its capabilities.
Lastly, an adversary may direct the kernel to execute instructions in user space instead. Since an adversary often controls user space (e.g., a root process) prior to kernel exploitation, she can simply prepare the malicious instructions there and invoke them from the kernel. This is possible because most operating systems (e.g., Linux) map kernel space and user space into one unified virtual address space, and page permissions are set such that the kernel has a one-way view of user space.
To summarize, we cast the problem into the following security requirements:
- **S1**: Execution of user space code from the kernel must never be allowed.
- **S2**: W⊕X protection employed by the operating system must always be enabled.
- **S3**: The page table base address must always correspond to a legitimate page table.
- **S4**: Any modification to the page table entry must be mediated and verified.
- **S5**: MMU must be kept enabled to ensure all existing memory protections function properly.
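To make the relationship among the requirements concrete, the following minimal C sketch encodes S1−S5 as a predicate over a snapshot of the normal world’s memory-management state. The struct and field names are our own illustrative shorthand, not part of any SPROBES interface:

```c
#include <assert.h>

/* Hypothetical snapshot of the normal world's memory-management state.
 * Field names are illustrative, not taken from the SPROBES implementation. */
struct mm_state {
    int pxn_on_user_pages;  /* S1: kernel cannot execute user-space code */
    int wxn_enabled;        /* S2: W^X (WXN bit) on */
    int ttbr_is_approved;   /* S3: page table base points to a vetted table */
    int pte_write_mediated; /* S4: page table updates trap to the secure world */
    int mmu_enabled;        /* S5: MMU on */
};

/* Returns 1 iff all five invariants hold for the given state. */
static int invariants_hold(const struct mm_state *s)
{
    return s->pxn_on_user_pages   /* S1 */
        && s->wxn_enabled         /* S2 */
        && s->ttbr_is_approved    /* S3 */
        && s->pte_write_mediated  /* S4 */
        && s->mmu_enabled;        /* S5 */
}
```

The conjunction reflects the paper’s argument structure: violating any single invariant reopens one of the five attack avenues above.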
III. BACKGROUND: TRUSTZONE ARCHITECTURE
TrustZone [16] is a set of security extensions first added to ARMv6 processors. Its goal is to provide a secure, separate environment that protects the confidentiality and integrity of critical computation from conventional computation. TrustZone partitions both hardware and software resources into two worlds - the secure world for assets that are security-sensitive and the normal world for conventional processing.
The TrustZone hardware architecture is illustrated in Fig. 1. The processor core implements the two separate worlds, i.e., the normal world and the secure world, and can be in only one world at a time, meaning the two worlds run in a time-sliced fashion. To maintain the processor state during a world switch, TrustZone adds a monitor mode, which resides only in the secure world. The software in monitor mode ensures that the state (e.g., registers) of the world the processor is leaving is saved, and that the state of the world the processor is switching to is correctly restored. This procedure is similar to a context switch between processes, except that some banked registers are not required to be saved. The mechanisms by which the processor can enter monitor mode are tightly controlled. Interrupts can be configured to be handled in either the normal world or the secure world. In addition, the normal world may proactively execute a Secure Monitor Call (SMC), a dedicated instruction that triggers entry to monitor mode (i.e., the secure world). For ease of understanding, the SMC instruction is similar to “int 0x80” on Intel x86 and “svc” on ARM in terms of privilege mode switching.
[Fig. 1. The TrustZone hardware architecture.]

The current world in which the processor runs is determined by the Non-Secure (NS) bit. In addition, almost all the system resources (e.g., memory, peripherals) are tagged with their own NS bits, determining the world they belong to. A general access control policy enforced by TrustZone is that the processor is able to access all resources when running in the secure world, while it can only access normal world resources (i.e., those with the NS bit set) when running in the normal world. For example, memory hardware is partitioned into the two worlds. When the processor is in the normal world, it can only see the physical memory of its own world. After entering the secure world, the processor can see all the physical memory in the system.

---
⁴Note that even code pages may have read-only data embedded in them.
Unfortunately, the secure world as provided by the TrustZone architecture does not help much with protecting the kernel running in the normal world from rootkits. Unlike a VMM, the secure world is not more privileged than the normal world with respect to normal world resources. Once a hardware resource is assigned to the normal world, the secure world cannot control access to it (e.g., by managing all physical memory, as a VMM would). For instance, the normal world has full privilege over its memory system, meaning it can arbitrarily set up its virtual memory environment (i.e., virtual address mappings and page permissions) and access its physical memory without requiring any permission from the secure world. Another example is that interrupts assigned to the normal world are handled locally, without any secure world code being executed. Because this control is relinquished to the normal world, rootkits can tamper with the normal world’s virtual memory environment as stated in Section II without being detected by the secure world.
IV. SPROBES MECHANISM
To enable the secure world to control the execution of normal world events of its choice, we present SPROBES, an instrumentation mechanism that can transparently break on any instruction in the normal world. The control flow of an SPROBE is shown in Fig. 2. When an SPROBE is hit, the secure world immediately takes control, switches the context, and invokes the specified SPROBE handler. The processor state of the normal world (e.g., the register values when the SPROBE is hit) is packed into a structure and passed to the invoked handler. Since this parameter cannot be forged by the normal world, the handler obtains a true view of the normal world from it. Finally, control returns to the location where the SPROBE was hit.
To trigger the secure world from a normal world instruction, we rewrite the specified instruction in the normal world, as illustrated in Fig. 3. In this example, a secure world process (called a “trustlet”) inserts an SPROBE at the normal world instruction “mov pc, lr” (e.g., a return from a function) by rewriting it to be the SMC instruction, the only instruction that triggers entry to the secure world, as stated in Section III. Although code pages in the normal world might be write-protected, the secure world may rewrite any normal world instruction without causing an exception, because the secure world has a different translation regime. This means all the virtual memory protections (e.g., W⊕X) employed in the normal world are not applicable to the secure world; the secure world is authorized to access all physical memory without intervention of the normal world virtual memory system. Later, when the SPROBE is hit, meaning the program counter reaches the SMC instruction, the processor immediately switches to the secure world. Within the secure world, the SPROBE handler is invoked, which can perform monitoring operations such as checking the state of the normal world kernel, prior to restoring the original instruction, i.e., “mov pc, lr”, that was substituted by the SMC instruction. Lastly, the processor exits the secure world and resumes execution starting from the restored instruction.
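The insert/restore cycle described above can be sketched as follows. This is a simplified user-space simulation over an array of instruction words (in the real system the rewrite is performed from the secure world via physical memory, bypassing normal world page permissions), using the A32 encodings of “SMC #0” and “mov pc, lr”:

```c
#include <assert.h>
#include <stdint.h>

/* A32 encoding of "SMC #0" (unconditional form used for illustration). */
#define SMC_INSN 0xE1600070u

/* One software breakpoint: remembers the original word it replaced. */
struct sprobe {
    uint32_t *addr;
    uint32_t  saved;
};

/* Plant the probe: save the original instruction, write SMC in its place.
 * In SPROBES this write is done from the secure world, which is not bound
 * by the normal world's page permissions. */
static void sprobe_insert(struct sprobe *p, uint32_t *addr)
{
    p->addr  = addr;
    p->saved = *addr;
    *addr    = SMC_INSN;
}

/* On a probe hit, the handler restores the original instruction so that
 * execution can resume at the very address that trapped. */
static void sprobe_restore(const struct sprobe *p)
{
    *p->addr = p->saved;
}
```

Because ARM instructions are fixed-length 32-bit words, the substitution never straddles instruction boundaries, which is what makes the single-word rewrite safe.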
There are several advantages to the SPROBES mechanism. First, it is independent of the software running in the normal world. All the features used by SPROBES are natively provided by the ARM hardware, such as the SMC instruction and cross-world memory access. Second, an unforgeable state of the normal world can be extracted directly from its hardware registers when an SPROBE is hit. Third, SPROBES are transparent to the normal world and thus do not require modifications to existing software. The original control flow in the normal world remains unchanged, so the operating system running in the normal world will not notice any overt differences caused by SPROBES. Fourth, SPROBES can break on any instruction in the normal world without restriction, because an SPROBE causes no side effects in the normal world. Fifth, other than the SMC instruction itself, SPROBES are implemented in the secure world, so all of their code and data are isolated from the normal world. This limits the ability of a rootkit to affect SPROBE execution. All these features combine to make SPROBES a powerful instruction-level instrumentation mechanism on the TrustZone architecture.
V. PROTECTING INTEGRITY OF KERNEL CODE
In this section, we propose an SPROBE placement strategy that can block all the approaches by which an adversary could violate the integrity of kernel code. With this placement strategy, we make a strong security guarantee: over the system’s lifetime, no kernel rootkit can inject code or modify approved kernel code. That is, even a rootkit in the normal world kernel is limited to running approved kernel code only. Further, since the SPROBES are inserted into kernel code, this placement strategy is sufficient to protect the SPROBES themselves from modification.
Attacker Model. According to the problem stated in Section II, we base our work on the following attacker model. We assume there is at least one exploitable vulnerability in the kernel by which an adversary could hijack the control flow of
the kernel, enabling that adversary to choose the address from which to run kernel code, as in return-oriented programming (ROP) attacks [22]. Defenses have been proposed that aim to reduce the adversaries’ ability to leverage such attacks, such as address space layout randomization [19], [20]. In addition, researchers have proposed methods to counter ROP attacks, such as modifying the compiler to build a kernel without return instructions [24] and using a VMM to enforce some degree of kernel-level control flow integrity [25], but these proposals still have important limitations. Thus, we aim to limit adversaries with an installed rootkit to running only approved kernel code, even though they may be capable of launching ROP-style attacks.
**Trust Model.** In this paper, we make the following assumptions. In addition to trusting all the secure world code, we assume that the normal world kernel code is free of rootkits prior to the execution of the first user-space process. That is, we assume that all rootkit threats originate from compromised root processes or any user-space process that has access to a kernel interface. We do not defend against malicious code running in the kernel prior to the first user-space process being initiated. In addition, we assume the use of hardware-based IOMMUs [26] to block malicious DMAs. Lastly, we trust that the load-time integrity of the kernel image is protected by technologies such as secure boot [16], [27].
**Placement Strategy Overview.** At a high level, our strategy is for the secure world to configure the normal world kernel such that SPROBES can mediate runtime access to memory management. This strategy combines enforcement of high-level memory-management settings for ARM (e.g., keeping W⊕X enabled to enforce S2 and keeping the MMU enabled to enforce S5) with VMM-like breakpoints to control kernel memory access (e.g., verifying page table integrity to enforce S3 and validating page table updates to enforce S4), leveraging an ARM hardware feature to prevent the execution of user-space code from supervisor mode (to enforce S1). The strategy is implemented in three phases, described below: pre-boot configuration, boot configuration, and runtime enforcement of S1−S5.
**Pre-boot Configuration.** We require that the secure world has some specific knowledge about the normal world kernel memory in order to establish the enforcement of S1−S5 prior to booting. First, the secure world must know the base address that will be used for the kernel page table necessary to enforce S3. Second, the secure world must know the kernel code pages (see Approved Kernel Code below) in this page table, so it can validate the page table mappings (see Enforcing S3 below for the method), such as checking the correct page permissions necessary to enforce S4. Third, the secure world must write-protect the kernel page table to continue enforcement of S4. Other invariants will be established after booting the kernel.
**Approved Kernel Code.** In addition, the secure world must know all of the code page memory regions for the normal world kernel that are approved for execution prior to booting. One issue with this assumption is that many kernels support loadable kernel modules (LKMs), which may legitimately change the code in the kernel. However, for smartphones, LKMs are not necessary because all the peripherals are known in advance and their drivers can be compiled along with the kernel. The Android 4.3 kernel on the Nexus 4 is an example of a kernel where LKMs are not supported. An alternative approach is to intercept the init_module system call by inserting an SPROBE and record the memory locations where module instructions are loaded. In addition, kernel modules may include security-sensitive instructions that require SPROBE mediation as well. Thus, we assume that LKMs are not supported in the normal world kernel and leave the more general problem for future work.
**Boot Configuration.** Once the normal world kernel is booted, further work is necessary to establish invariants S2 and S5. We require that W⊕X is set (S2) and the MMU is enabled (S5) prior to running the first user-space process. These are typically set early in the boot sequence, and SPROBES are placed to mediate access to these values as described below. If they are not set prior to changing the page table base value for the first time, indicating that a new process is running, then the secure world can halt the system. We note that although preventing the execution of user-space code in supervisor mode (S1) is not relevant until the first new process is initiated, we require the operating system to set the Privileged eXecute-Never (PXN) bit\(^2\) on all user-space pages before actually context switching to the first user-space process.
**Security Guarantees.** Using the above SPROBE placement strategy, we expect to obtain the information necessary to enforce invariants S3 and S4 over the kernel prior to boot, configure enforcement of S2 and S5 prior to running the first user-space process, and enforce S1 (prevent the kernel from running unprivileged code), S3 (enforce correct page table base addresses for the kernel and all processes), and S4 (prevent unauthorized modifications of the page table entries) over the kernel and user-space processes at runtime. We describe our approach to placing SPROBES to achieve these guarantees in the rest of this section.
**Enforcing S2.** As the WXN is a bit in the System Control Register (SCTLR), a rootkit would have to write to this register in order to turn off W⊕X protection. Recall that we assume the WXN bit is set at initialization time. The idea, then, is to insert an SPROBE at every kernel instruction that writes to the SCTLR, so that the secure world is triggered whenever a rootkit attempts to turn off the protection. When such an SPROBE is invoked, it simply must block values that attempt to unset WXN to achieve S2. Since ARM has fixed-length instructions, finding such instructions in the kernel binary is straightforward, particularly relative to x86.
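A hedged sketch of this handler’s policy check: assuming the ARMv7-A bit positions (M, the MMU enable, at bit 0 and WXN at bit 19 of the SCTLR), the handler can vet the value about to be written before permitting the write. The function name is our own:

```c
#include <assert.h>
#include <stdint.h>

/* SCTLR bit positions on ARMv7-A: M (MMU enable) is bit 0, WXN is bit 19. */
#define SCTLR_M   (1u << 0)
#define SCTLR_WXN (1u << 19)

/* Policy for an SPROBE on an SCTLR-writing instruction: allow the write only
 * if it keeps both the MMU (S5) and WXN (S2) enabled. Any other SCTLR bits
 * (alignment checks, caches, ...) may change freely. */
static int sctlr_write_allowed(uint32_t new_sctlr)
{
    return (new_sctlr & SCTLR_M) != 0 && (new_sctlr & SCTLR_WXN) != 0;
}
```

Note that the same check covers S5 for free, which is why the paper needs no additional SPROBES for the MMU Enable bit.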
One issue is that there are multiple control bits in the SCTLR, such as Alignment Check Enable and MMU Enable. This causes false sharing: updating a non-WXN bit in the SCTLR will also hit the SPROBE and trigger the secure world, bringing unnecessary overhead. We evaluate the performance impact of false sharing in Section VII.
**Enforcing S5.** Our solution does not insert additional SPROBES to prevent the adversary from disabling the MMU, because the MMU Enable bit is in the same register, i.e., the SCTLR, as the WXN bit. All the SPROBES used to protect the WXN bit also protect the MMU Enable bit; therefore, S5 is satisfied. Given that most operating systems do not disable the MMU after it is turned on, it is easy for the secure world to detect the presence of a rootkit that tries to override the MMU Enable bit.
**Enforcing S3.** On ARM processors, the base address of the page table is stored in a special register called the Translation Table Base Register (TTBR). Similarly to the above, by inserting an SPROBE at each instruction that writes to the TTBR, the secure world can fully mediate the operations that switch the page table, providing complete mediation of writes to this register.
Normally, to create separate address spaces for each process, the operating system assigns a different page table to each process. When a process is scheduled on a processor, the operating system updates the TTBR with that process’s page table base address. Thus, the secure world needs to be capable of ensuring that only valid page table bases are applied at each context switch. In order to do this, the secure world has to maintain the integrity of the page tables. When a TTBR value is asserted for the first time, the secure world must validate the new page table, ensuring that the addresses used are valid and checking compliance of the permission bits. For example, in Linux the kernel portion of the process’s page table (i.e., addresses above 0xc0000000) must be the same in each process page table, and the approved kernel code pages must not be writable. In addition, double mappings (i.e., two virtual pages mapped to the same physical frame) must not exist in the page table [28], particularly between a code page and a data page; otherwise the attacker could modify kernel code by writing to the data page. By restricting updates of the TTBR to only valid page tables, we enforce invariant S3. Note that we only need to validate a page table the first time that we see its TTBR value because of the way we control page table updates to enforce S4 below. Note further that, unlike protecting the WXN bit, the SPROBES that protect the TTBR will be hit regularly due to process context switches. As a result, some performance overhead is fundamental to this enforcement.
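The first-use validation just described might look like the following sketch. The entry layout is a deliberately simplified model of a page table, not the ARM descriptor format, and the names are ours:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Simplified page table entry model for illustration only. */
struct pte {
    uint32_t pfn;        /* physical frame number */
    int writable;
    int executable;
    int is_kernel_code;  /* frame belongs to the approved kernel code set */
};

/* First-use validation per S3: approved kernel code must not be writable,
 * no entry may be both writable and executable (W^X), and no writable entry
 * may alias a kernel code frame (double mapping). */
static int page_table_valid(const struct pte *t, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (t[i].is_kernel_code && t[i].writable)
            return 0;                          /* writable kernel code */
        if (t[i].writable && t[i].executable)
            return 0;                          /* W^X violated */
        for (size_t j = 0; j < n; j++)
            if (i != j && t[i].pfn == t[j].pfn &&
                t[i].is_kernel_code && t[j].writable)
                return 0;                      /* writable alias of a code frame */
    }
    return 1;
}
```

A real implementation would also confirm that the kernel portion of the table matches the approved master table, which the sketch omits.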
**Enforcing S4.** Placing SPROBES to prevent the page table from being modified illegally by rootkits is the most challenging part of our solution. Unlike the cases above, which focus on mediating access to a special register, page tables are just normal memory, so there are many usable instructions that can write to them. Inserting SPROBES at all of those instructions (e.g., every store) is impractical because of the performance overhead, and it is unnecessary: what we really want is to monitor updates to the page tables, which occupy only a small portion of the whole address space. Instead, therefore, we apply write-protection to the page tables, so that any table update generates a page fault. If we then insert an SPROBE into the page fault handler, the secure world is triggered upon every table update. This is essentially how shadow page tables are implemented in virtualization, but we need to implement the mechanism using the ARM hardware.
Unlike Intel x86, ARM requires the operating system to place its exception vector table at a fixed location, either 0x00000000 or 0xffff0000, determined by the Vectors bit in the SCTLR. Normally operating systems use 0xffff0000 as the exception base address because addresses near 0x00000000 are interpreted as NULL pointers. Because of this, the rootkit cannot re-define another exception vector table to bypass the SPROBE in the page fault handler. It is also worth noting that exception vectors are code rather than data, so they cannot be written by a rootkit without changing the code page permissions, which is exactly what this SPROBE enforces. Each vector contains an instruction (normally a branch instruction) that is executed once the corresponding exception happens (e.g., a page fault). To illustrate, we show an example exception vector table in assembly code in Listing 1. For this listing, we propose to insert an SPROBE at the start of the function abort_handler, which services page faults. Therefore, given that the adversary can neither re-define a new exception vector table nor modify the current one, updates to the normal world’s page tables are fully mediated by the secure world.
When the secure world is triggered due to a page fault in the normal world, there are three situations worth discussing; the flow chart of SPROBE handling is shown in Fig. 4. First, if the page fault is intended, such as one caused by copy-on-write after fork, we simply return to the exception handler to let the kernel deal with the exception. Second, if the page fault is caused by a legitimate table update, such as an mmap system call, the secure world first emulates the table update and then returns to the faulting instruction as if nothing happened. By returning to the instruction that caused the page fault instead of the next instruction in the exception handler, we make the whole procedure invisible to the operating system, since it does not expect such an exception. Third, if the page fault is caused by either (1) making a page writable that maps to a physical frame containing kernel code, or (2) making a page executable that maps to any other physical frame, the secure world should block such page table modifications and trigger rootkit detection mechanisms. Therefore, S4 is satisfied.

---
\(^2\)The PXN bit is a permission bit in page table entries that determines whether a memory region is executable from privileged modes.
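The three-way decision just described can be sketched as a small classifier; the type and field names below are illustrative only, not taken from the SPROBES implementation:

```c
#include <assert.h>

/* Outcome of the secure-world SPROBE handler on a normal world page fault,
 * corresponding to the three cases discussed in the text (cf. Fig. 4). */
enum fault_action {
    PASS_TO_KERNEL,   /* intended fault, e.g. copy-on-write */
    EMULATE_UPDATE,   /* legitimate page table update, e.g. from mmap */
    BLOCK_AND_ALERT   /* attempted kernel code injection */
};

/* Hypothetical decoded fault information. */
struct fault_info {
    int targets_page_table;     /* write hit the write-protected table memory */
    int makes_code_writable;    /* update maps a kernel-code frame writable */
    int makes_data_executable;  /* update maps a non-code frame executable */
};

static enum fault_action classify_fault(const struct fault_info *f)
{
    if (!f->targets_page_table)
        return PASS_TO_KERNEL;
    if (f->makes_code_writable || f->makes_data_executable)
        return BLOCK_AND_ALERT;
    return EMULATE_UPDATE;
}
```

In the EMULATE_UPDATE case the handler also performs the write on the kernel’s behalf before returning to the faulting instruction, which the sketch leaves out.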
**Enforcing S1.** Since all page table entries are protected from modification by enforcing S4, the PXN bits set prior to the installation of a rootkit are safe from modification, and the secure world can prevent the rootkit from creating new mappings in which the PXN bits are unset. In addition, the S3 check can reject a new page table whose PXN bits are not properly set, even one created by a compromised kernel. Thus, all page tables must have the PXN bits set for all user-space pages, satisfying S1.

We enforce all five of these invariants by mediating a fixed set of instruction types. An implicit assumption behind our design is that the rootkit cannot jump into the middle of an instruction to discover unintended sequences. Fortunately, ARM, like most other RISC machines, does not support misaligned instruction fetch, defeating this possibility completely.

In summary, by enforcing invariants S1 through S5, we mediate all the ways that a party, either the legitimate kernel or a malicious adversary, controls the system’s virtual memory environment. S5 ensures the MMU is always on, which serves as a foundation for the rest of the protections. S3 further limits the usable page tables to a small set, preventing the adversary from setting the page table base to an arbitrary value. S4 protects the integrity of page table entries, thwarting any attempt to modify kernel code and thus protecting the SPROBES from unauthorized removal. As a supplement, S1 and S2 disallow execution of injected instructions and ensure that the enforced mediation is not bypassable. Overall, the five invariants restrict adversaries to approved code pages, indirectly protecting any SPROBES inserted into those pages for enforcement purposes.
VI. IMPLEMENTATION

We implement a prototype of SPROBES on top of a Cortex-A15 processor emulated by the Fast Models 8.1 emulator [29] from ARM. Although SPROBES do not rely on the software (e.g., the kernel) running in the normal world, to provide a proof of concept of our solution we run Linux 2.6.38 in the normal world as a case study. We build Linux from source code using the GNU ARM bare metal toolchain (e.g., arm-none-eabi). This toolchain is mainly for building applications for the ARM architecture without any operating system support. To make the SPROBES implementation simpler and more efficient, we extract all the necessary kernel information beforehand. For example, we use *objdump* to inspect the address space layout of the kernel code to identify the locations for SPROBES.

One key step in the SPROBES implementation is to substitute the target normal world instructions with SMC instructions. In most cases, the MMU is enabled in the normal world, so the secure world can only see its virtual addresses. To access a given virtual address in the normal world, the secure world first needs to translate it to a physical address and then create a corresponding mapping in its own page table. Note that the physical address spaces of the two worlds can overlap, so the page table entry has an NS bit to indicate which world the physical address is from. To convert a virtual address to a physical address, the Cortex-A15 architecture has an external coprocessor (CP15) that can complete such a translation, which avoids manually walking the page table in the normal world.
In order to enforce W⊕X protection, we disassemble the text section of the kernel image and identify all instructions that write to the SCTLR. Then, in the secure world, we hardcode the addresses of those instructions and insert SPROBES after the Linux kernel is loaded in the normal world.
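A sketch of such a scan over the kernel text section: since A32 instructions are fixed-length 32-bit words, matching `MCR p15, 0, Rt, c1, c0, 0` (the SCTLR write) reduces to a mask-and-compare that ignores the condition field and the source register. The encoding constants below follow our reading of the ARMv7-A manual and should be verified against a disassembler before use:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Match A32 "MCR p15, 0, Rt, c1, c0, 0" (a write to the SCTLR), ignoring
 * the condition field (bits 31-28) and the source register Rt (bits 15-12). */
#define MCR_SCTLR_MASK  0x0FFF0FFFu
#define MCR_SCTLR_VALUE 0x0E010F10u

static int writes_sctlr(uint32_t insn)
{
    return (insn & MCR_SCTLR_MASK) == MCR_SCTLR_VALUE;
}

/* Scan a text section (array of fixed-length instruction words) and record
 * the word indices where SPROBES must be placed. Returns the hit count. */
static size_t find_sctlr_writes(const uint32_t *text, size_t n,
                                size_t *hits, size_t max_hits)
{
    size_t count = 0;
    for (size_t i = 0; i < n && count < max_hits; i++)
        if (writes_sctlr(text[i]))
            hits[count++] = i;
    return count;
}
```

The same mask-and-compare pattern, with different constants, applies to the TTBR and TTBCR scans described next.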
Similarly, to protect the TTBR, we scan the disassembled text section of the kernel image and record all instructions that can write to the TTBR. However, ARM processors support more than one active page table (two at maximum) at a time: one is determined by TTBR0 while the other is determined by TTBR1. The TTBRs are used together to determine addressing for the full address space; which table is used for which address range is controlled via the Translation Table Base Control Register (TTBCR). In its actual implementation, Linux sets the TTBCR to a fixed value and uses only one page table throughout its lifetime. However, to prevent the adversary from enabling the second page table, we still need to insert SPROBES at those instructions that write to the TTBCR.
Protecting the page table requires the secure world to modify page permissions of the normal world. As stated in Section V, the secure world sets the page table memory to be read-only, so that any page table update causes a page fault. The implementation is similar to the mechanism used by a VMM to synchronize guest page tables and shadow page tables [30], [31]. Since we also insert an SPROBE into the page fault handler, the secure world is triggered on every page fault. The chain connecting page table updates to the secure world gives the normal world software no opportunity to interfere with this function, as no normal world instruction can be executed in the middle. This requires us to insert the SPROBE at the very first instruction of the exception handler. There are two benefits to doing so: (1) it ensures no normal world instruction is executed before the secure world gains control, and (2) in those cases where the operating system does not expect such a page fault (e.g., an mmap system call), it minimizes the effects on the processor state of the normal world and thus makes state recovery (i.e., restoring the register values from before the page fault) easier.
VII. EVALUATION
A. Security Analysis
In this section, we evaluate the security of our solution by summarizing how our design has achieved the goal of protecting kernel code integrity for Linux 2.6.38.
For illustration, we list the 12 SPROBES necessary to implement the placement strategy for Linux 2.6.38 in four groups:
- **Type #1**: The 6 SPROBES that protect the SCTLR containing the WXN and MMU Enable bit.
- **Type #2**: The 4 SPROBES that protect the TTBR containing the base address of the page table.
- **Type #3**: An SPROBE that protects the TTBCR to enforce usage of only one TTBR (i.e., TTBR0), as required by Linux.
- **Type #4**: The SPROBE that is inserted at the first instruction of the page fault handler.
At a high level, we demonstrate that our design can effectively restrict the adversary to approved kernel code in the following steps. We first show that the adversary has to change the memory environment in order to execute injected or modified code in kernel space. Then, we illustrate how the combination of the four types of SPROBES listed above enables the secure world to detect all changes to the normal world memory environment. Finally, we claim that no malicious change to the memory environment can bypass the checks in the secure world.
First, according to the Boot Configuration in Section V, the only possible attack against the integrity of kernel code without tampering with the virtual memory environment is to modify the kernel image file, so that the system would be in a "compromised" state the next time it is loaded into memory. We foil such attacks by utilizing technologies like secure boot [16], [27], as assumed in the Trust Model in Section V. Note that checking the integrity of the kernel image file is sufficient to ensure load-time kernel integrity, as kernel loading happens at the very beginning of a boot sequence, before which no adversary is assumed to have access to the system.
Second, we claim that modifying the virtual memory environment will always be captured by the secure world, regardless of its purpose. In essence, a virtual memory environment is uniquely defined by the active page tables as long as the MMU is on³. So, setting aside the cases where the MMU is disabled, modifying the virtual memory environment is equivalent to modifying the active page tables. To accomplish this, the attacker may either (a) switch to a set of page tables under her control or (b) modify page table entries in place. However, (a) would trigger Type #2 and/or Type #3 SPROBES, while (b) triggers Type #4 SPROBES since page tables are write-protected. Alternatively, the attacker can simply disable the MMU to access physical memory with no restrictions; such an operation will likewise be trapped to the secure world because of Type #1 SPROBES.
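The case analysis in this paragraph can be condensed into a small lookup table; the operation names below are our own shorthand for exposition, not part of the SPROBES implementation:

```python
# Illustrative mapping from memory-environment changes to the SPROBE
# type(s) that trap them; operation names are ours, for exposition only.
TRAPS = {
    "disable_mmu":       {"#1"},        # SCTLR write clearing the MMU Enable bit
    "disable_wxn":       {"#1"},        # SCTLR write clearing the WXN bit
    "switch_page_table": {"#2", "#3"},  # TTBR write, or TTBCR write enabling TTBR1
    "edit_page_table":   {"#4"},        # write fault on the read-only page table
}

def detected(operations):
    """True iff every attempted change hits at least one SPROBE type."""
    return all(TRAPS.get(op) for op in operations)

print(detected(["disable_mmu", "switch_page_table", "edit_page_table"]))
```

The point of the argument is that the mapping is total: every way of altering the memory environment falls into some row of this table.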
Finally, we show that altering the virtual memory environment for a malicious purpose will not bypass the checks in the secure world. To achieve this, we need to draw a clear boundary between legitimate and malicious operations on the memory settings. To begin with, triggering Type #1 SPROBES, either by disabling the MMU or the W⊕X protection, is a clear indication of system compromise in the normal world, because an operating system (e.g., Linux) generally will not turn either of them off once they are on. When the active page tables are modified, either by (a) or (b), triggering Type #2/#3 or Type #4 SPROBES, we enforce the same permission settings as shown in Fig. 5: non-executable kernel data, unwritable kernel code, and non-executable user pages. Any attempt to violate this configuration is regarded as a malicious operation. Further, to ensure that physical kernel code frames are protected by managing the permission settings on virtual pages, we maintain a fixed one-to-one mapping between kernel code pages and their corresponding physical frames. In addition, by forbidding double mappings, we rule out the possibility of modifying a physical code frame through a virtual data page. Together, these measures ensure that only a set of unmodified physical frames has been executed since the system started, which is our goal in this paper.
³ Though the permission settings can be enhanced through bits like SCTLR.WXN.
To understand what overhead SPROBES incur, we count the number of instructions instead. An SPROBE hit causes 5611 more instructions (including the original SMC instruction) to be executed in the secure world. Note that the number of instructions is a very coarse-grained measurement as it does not take microarchitectural events into account.
We run Linux 2.6.38 in the normal world with 28 startup processes, including 4 daemon processes, 1 interactive process and 23 kernel threads. We run a shell script that invokes the write system call in a loop as the workload. We measure the individual cost for SPROBES of the four different types. To understand how frequently those SPROBES are hit in Linux 2.6.38, we use hit frequency, the average number of elapsed instructions between two consecutive hits, as the metric. We list our results in Table I. Both Type #1 and Type #3 SPROBES are not hit after booting, which means enforcing S2, S3 and S5 incurs negligible runtime overhead. Type #2 SPROBES are hit on each context switch. On average 313,836 instructions are executed between each Type #2 SPROBE hit, contributing to 2% of the instructions executed. We further measure the Type #4 SPROBES during the boot stage, when page table updates are the most intensive. It turns out Type #4 SPROBES are hit every 22,424 instructions. After the kernel is set up, the hit frequency goes down to every 85,982 instructions. Since smartphone boot times can be performance-critical, SPROBES overhead may be an issue. Perhaps restricting the code executed at boot time to only trusted code is necessary to achieve performance goals; however, this may introduce a window of attack for adversaries. Normal runtime overheads incurred by SPROBES may be close to acceptable for the protection offered, given that only two types of SPROBES are hit and they account for fewer than 10% of the instructions executed. We note that current VM introspection methods trap all page faults as well, so the number of traps would be the same, although the overhead for TrustZone may be higher.
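The reported percentages can be sanity-checked from the per-hit cost of 5,611 instructions and the hit frequencies above (a back-of-the-envelope model that attributes all overhead to SPROBE hits):

```python
cost_per_hit = 5611          # instructions executed in the secure world per hit
type2_period = 313_836       # instructions between Type #2 hits (context switches)
type4_period = 85_982        # instructions between Type #4 hits (after boot)

type2_overhead = cost_per_hit / type2_period   # ~1.8%, reported as 2%
type4_overhead = cost_per_hit / type4_period   # ~6.5%
total = type2_overhead + type4_overhead        # consistent with "fewer than 10%"

print(f"{type2_overhead:.1%} {type4_overhead:.1%} {total:.1%}")
```

The arithmetic confirms both the 2% figure for Type #2 and the claim that the two active SPROBE types together stay under 10% of instructions executed.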
<table>
<thead>
<tr>
<th>SPROBE Type</th>
<th>#1</th>
<th>#2</th>
<th>#3</th>
<th>#4</th>
</tr>
</thead>
<tbody>
<tr>
<td>Hit Frequency</td>
<td>N/A</td>
<td>313,836</td>
<td>N/A</td>
<td>85,982</td>
</tr>
</tbody>
</table>
TABLE I: Hit frequency of different types of SPROBES
Revisiting the false sharing issue stated in Section V, the fact that Type #1 SPROBES are never hit after initialization demonstrates that false sharing can be reduced to an acceptable level in a real implementation.
VIII. RELATED WORK
The main focus of this paper is to enforce kernel code integrity. SecVisor [10] and NICKLE [11] are two VMM-based approaches proposed to protect lifetime kernel code integrity. SecVisor is a tiny hypervisor (fewer than 2,000 SLoC) that restricts the code running in kernel space. It achieves this by virtualizing physical memory, allowing SecVisor to exclusively set hardware memory protection over kernel memory, independent of the memory protection in the guest machine. In addition, SecVisor also checks certain kernel states upon mode switches, e.g., return from a system call. NICKLE is implemented as part of a virtual machine monitor that allows only authenticated code to run. Besides the standard memory for running the operating system, it also maintains a separate physical memory region called shadow memory and stores a duplicate of authenticated code in this area. The VMM enforces that the guest kernel cannot access the shadow memory, so the integrity of code within this region is protected. During runtime, NICKLE transparently routes instruction fetches to this shadow memory and thus blocks any attempt to execute unauthorized instructions.
Petroni et al. proposed a state-based control flow integrity (SBCFI) monitor [32] that detects kernel rootkits by periodically snapshotting the memory in a VM. It relies on the assumption that possible desired states of memory are enumerable. Because of this, in practice, the detection is mainly effective on invariant (e.g., kernel code) or enumerable contents (e.g., some global function pointers). However, a sophisticated attack can remain undetected if it only occurs between two snapshots, which is a limitation of asynchronous detection.
Garfinkel et al. first proposed a VMM-based monitor, called Livewire [2], to mainly protect system invariants. The Livewire system introduced the VM introspection technique and is able to check the integrity of kernel code and verify function pointers with fixed values (e.g., system call table).
Zhang et al. proposed a coprocessor-based kernel monitor [12]. Though it may not have full access to the state of the host processor (e.g., registers), by using an additional piece of hardware such as a PCI cryptographic coprocessor, the solution improves overall system performance compared with VMM-based monitoring. Comparably, Petroni et al. proposed a prototype coprocessor-based kernel monitor called Copilot [13], which periodically checksums invariant memory regions and sends reports to a remote administration station. The autonomous subsystem provided by the coprocessor is similar to TrustZone, although the secure world is much more powerful, as detailed in Section III.
Wang et al. built a hardware-assisted tampering detection framework called HyperCheck for VMMs [14]. HyperCheck utilized Intel’s System Management Mode (SMM) to reliably check the state of the VMM and securely communicate to a remote administration server.
IX. CONCLUSION
In this paper, we have presented the design and implementation of SPROBES, an instrumentation mechanism that enables the secure world to introspect the operating system running in the normal world. To protect the SPROBES, we identify a set of five invariants and present an informal proof of how 12 SPROBES enforce these invariants comprehensively. By enforcing all five of these invariants, we make a security guarantee that only approved kernel code is executed even if the kernel is fully compromised.
X. ACKNOWLEDGEMENT
This material is based upon work supported by the National Science Foundation under Grant No. CNS-1117692. Research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-13-2-0045 (ARL Cyber Security CRA). The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon. We are also grateful for many technical discussions with Jason Chiang and Rick Porter and their continuous help on this project.
REFERENCES
SMARTGen: Exposing Server URLs of Mobile Apps With Selective Symbolic Execution
Chaoshun Zuo
The University of Texas at Dallas
800 W. Campbell RD
Richardson, Texas
chaoshun.zuo@utdallas.edu
Zhiqiang Lin
The University of Texas at Dallas
800 W. Campbell RD
Richardson, Texas
zhiqiang.lin@utdallas.edu
ABSTRACT
Server URLs, including domain names, resource paths, and query parameters, are important to many security applications such as hidden service identification, malicious website detection, and server vulnerability fuzzing. Unlike traditional desktop web apps, in which server URLs are often directly visible, the server URLs of mobile apps are often hidden, only being exposed when the corresponding app code gets executed. Therefore, it is important to automatically analyze the mobile app code to expose the server URLs and enable the security applications that rely on them. We have thus developed SMARTGen, which features selective symbolic execution to automatically generate server request messages that expose the server URLs, by extracting and solving user-input constraints in mobile apps. Our evaluation with 5,000 top-ranked mobile apps (each with over one million installs) in Google Play shows that with SMARTGen we are able to reveal 297,780 URLs in total for these apps. We then submitted all of these exposed URLs to a harmful URL detection service provided by VirusTotal, which identified 8,634 URLs as harmful. Among them, 2,071 belong to phishing sites, 3,722 to malware sites, and 3,228 to malicious sites (with 387 sites overlapping between the malware and malicious categories).
Keywords
Mobile App, Symbolic Execution, URL Security
1. INTRODUCTION
Over the past several years, we have witnessed a huge increase in the number of mobile devices and mobile apps. As of today, there are billions of mobile users, millions of mobile apps, and hundreds of billions of cumulative mobile app downloads [6]. When talking to a remote service (e.g., user registration, login, password reset, and pushing and pulling data of interest), mobile apps often use URLs which include domain (or host) names, resource paths, and request parameters. Unlike traditional desktop web apps, in which the server URLs are directly visible to the end users and web browsers (e.g., they appear in the address bar), the server URLs of mobile apps are often hidden in the apps themselves, and only when the corresponding app code is executed do they become visible.
Making URLs invisible to app users has certainly made mobile apps more user friendly. For instance, a user does not have to remember the server address and enter the URLs to access the mobile services. Unfortunately, it has also introduced a number of security issues. First, hiding the URLs may allow the servers to collect some private sensitive information (e.g., GPS coordinates and phone address books can be sent to the servers at certain URLs). Second, it may also allow the mobile apps to talk to some unwanted services (e.g., malicious ad sites). Third, it can give app developers the illusion that their services are secure (security through obscurity): since their server URLs are hidden, no one knows of them and no one will attack (or fuzz) them.
Therefore, exposing the server URLs of mobile apps is important to many security applications such as hidden service identification, malicious website detection, and server vulnerability hunting. However, we must analyze the app code to expose them, since they are often fragmented and scattered (e.g., a domain name can appear here, but the request parameters or the resource path can be there). Interestingly, as the URLs must be used in the network communication APIs, we can perform a targeted analysis: namely, starting from these APIs, we can infer and observe how the parameters are generated and used, to reveal the URLs.
Recently, there have been a number of efforts in the direction of targeted code analysis of mobile apps. Specifically, A3E [13] performs taint-style data flow analysis to build a high-level control flow, from which it performs a targeted exploration of app activities, but it does not attempt to solve the path constraints, which can stop the direct exploration of certain paths. AppsPlayground [25] and SMV-Hunter [28] recognize the labels in the UI elements and use them to more intelligently generate user input, but they still do not provide any soundness or completeness guarantees. While symbolic execution has also been explored, existing efforts either focused on capturing and solving the constraints for activity transitions (e.g., ACTEve [10], which unfortunately needs access to the app's source code), or only targeted malicious app analysis (e.g., IntelliDroid [29]), which has little UI involvement.
To advance the state-of-the-art, we develop SMARTGen, a new targeted, symbolic execution enabled tool to automatically explore the UI of a mobile app with the goal of systematically exposing the server URLs. Similar to many of the existing approaches (e.g., [16, 29]), we also build an extended call graph (ECG) based on the APK code (not the source code) of an app for each entry point of app execution; such a graph captures not only the explicit function calls but also the implicit ones introduced due to the Android framework callbacks. Guided by the ECG, we then traverse the graph to locate whether there is any invocation of network message sending
APIs. If so, we extract the corresponding path constraints that control the execution of these APIs. We then solve these constraints and execute the app with a new dynamic runtime instrumentation technique we developed, to control the app's execution on a real smartphone, provide proper input, and explore the possible network message sending activities. Eventually, the execution of each network message sending API will generate a server request message, which usually contains the server URLs.
With the revealed URLs and generated server request messages (which show how many parameters are involved and what kind of server interface would process the user request), we can feed them to existing automatic server vulnerability hunting tools (e.g., sqlmap [1] for SQL injection, or watcher [3] for cross-site scripting) to fuzz the server, if we have permission to do so. We can also use them to detect whether there are any hidden services or malicious URLs. In this paper, we focus on harmful URL detection, and we have tested SMARTGen with 5,000 top-ranked Android apps (each with more than one million installs) crawled from Google Play. Our evaluation shows that with SMARTGen we were able to reveal 297,780 URLs in total for these apps, whereas a non-symbolic-execution tool such as Monkey can reveal only 128,956 URLs. By submitting all of these exposed URLs to a harmful URL detection service at VirusTotal for security analysis, we obtained 8,634 harmful URLs. Among the 297,780 reported URLs, 83% were submitted to VirusTotal for the first time.
In summary, we make the following contributions:
- We propose selective symbolic execution, a technique to solve input-related constraints for targeted mobile app execution. We also develop an efficient runtime instrumentation technique to dynamically insert analysis code into an executed app in real mobile devices using API hooking and Java reflection.
- We have implemented all of the involved techniques and built a novel system, SMARTGen, to automatically generate server request messages and expose server URLs.
- We have tested SMARTGen with 5,000 top-ranked Android apps and exposed 297,780 URLs in total. We found that these top-ranked apps actually talk to 8,634 malicious and unwanted web services, according to the harmful URL detection results from VirusTotal.
2. BACKGROUND AND RELATED WORK
2.1 Objectives
The goal of this work is to expose the server interface (namely the URLs) of mobile apps by generating server request messages from the app. While there are many ways to do so, we wish to have an approach that is:
- **Automated.** The system should not involve any manual intervention, such as manual installation and manual launching, and instead everything should be automated.
- **Scalable.** Since we aim to expose the URLs for apps in popular app stores such as Google Play, we need an approach that can perform a large scale study. For instance, it should not take too much time to generate a server request message.
- **Systematic.** The path exploration should be systematic. We should not blindly click a button or randomly generate an input to explore the app activities. Instead, all the targeted paths containing the network message sending APIs should be explored.
2.2 A Running Example
To illustrate the problem clearly, we use a popular app ShopClues as a running example. This app has between 10 million and 50 million installs according to the statistics in Google Play. There are many activities inside this app, and for simplicity we just use its password reset activity, as shown in Figure 1, to describe how we would have performed our analysis.
In particular, if we aim to reveal whether the password reset interface at the server side of ShopClues contains any security vulnerabilities (e.g., SQL injection), we need to enter a valid email address in the corresponding EditText box and then click the SUBMIT button, which will automatically generate a sample password reset request message as shown in Figure 2. With this message, the server interface (e.g., the host name, the query parameters) of password reset is clearly exposed. Next, if we have permission from the service provider to perform penetration testing, we may apply standard server vulnerability fuzzing tools such as sqlmap [1] to automatically mutate this request message (e.g., adding SQL commands such as "and 1=1" to the request fields and analyzing the response message to check whether the SQL commands get executed) to determine whether the server contains any testable security vulnerabilities.
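To illustrate the kind of mutation such a fuzzer automates (a simplified sketch in the spirit of sqlmap's tautology tests, not sqlmap itself):

```python
from urllib.parse import urlencode, parse_qsl

def tautology_mutations(query):
    """Yield query strings with a classic "and 1=1"-style probe appended
    to each parameter in turn (a simplified sketch of what sqlmap automates)."""
    params = parse_qsl(query)
    for i, (key, value) in enumerate(params):
        mutated = list(params)
        mutated[i] = (key, value + "' and 1=1 -- ")
        yield urlencode(mutated)

# Hypothetical query string modeled on the running example.
for q in tautology_mutations("user_email=a@b.com&key=abc"):
    print(q)
```

Each yielded query probes one parameter; comparing server responses between the original and mutated requests is what reveals an injectable field.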
However, it is non-trivial to generate a sample request message to expose the URLs as shown in Figure 2. Specifically, it requires a valid input in the EditText box, as shown in the sample decompiled code in Figure 3 for the SUBMIT onClick event. More specifically, to really trigger the password reset request, the user input in the EditText has to pass the non-empty check (line 8) and match an email address format (line 33), while the app needs to maintain a
network connection (line 16). Without a correct email address and network connection, it is impossible to expose the password reset server URL.
While the path constraint appears fairly simple in our running example, there are more sophisticated ones, such as when the contents of two EditText boxes need to be equal (e.g., when registering an account, the confirmed email address needs to match the one entered first), when an age needs to be greater than 18, when a zip code needs to be a five-digit sequence (a phone number may have similar checks), or when a file name extension needs to match a particular pattern (e.g., .jpg). We thus need a systematic approach to solve these constraints and expose the server request messages, including the URLs.
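Constraints of these kinds can be expressed as simple predicates over the candidate input; the following is a hypothetical illustration of a satisfying assignment, not SMARTGen's actual constraint solver:

```python
import re

# Hypothetical predicates mirroring the constraint patterns listed in the text.
def valid_email(s):     return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", s) is not None
def adult(age):         return age > 18
def valid_zip(s):       return re.fullmatch(r"\d{5}", s) is not None
def is_jpg(name):       return name.lower().endswith(".jpg")
def emails_match(a, b): return a == b

# One assignment satisfying all constraints at once, i.e., an input the
# generated request would have to carry to reach the network-send API.
candidate = {"email": "user@example.com", "confirm": "user@example.com",
             "age": 21, "zip": "75080", "file": "photo.jpg"}
assert valid_email(candidate["email"])
assert emails_match(candidate["email"], candidate["confirm"])
assert adult(candidate["age"]) and valid_zip(candidate["zip"]) and is_jpg(candidate["file"])
print("all constraints satisfied")
```

A real solver works the other way around: it derives such an assignment from constraints extracted out of the app's bytecode rather than checking a hand-written one.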
2.3 Related Work
Static Analysis. Static analysis is often scalable since it does not have to execute the app. However, we have to exclude purely static analysis systems, because what we need is a concrete request message (which is often the seed for a fuzzing tool), and generating such a message statically faces at least the following challenges:
- **Field Recognition and Value Generation.** When sending a request message to a server, there are often several fields, such as user_email and key in our running example. Their values are also context specific; e.g., key needs to be d12121c70dda5edfgd1df6633fd360. How to identify these values statically is a challenge.
- **Data Format Recognition.** The app also needs to package the user input in a request message using a certain format, such as json in our running example, or other formats such as xml. We must also determine these when statically generating request messages.
Dynamic Analysis. We can avoid solving all the static analysis challenges if we can execute the app directly and use dynamic analysis. Recently, there are a set of dynamic app testing tools such as Monkey [7] that can automatically execute and navigate an app when given only a mobile app binary, or Robotium [4], a testing framework for Android apps that is able to interact with UI elements of an app such as menus and text boxes.
There are also more advanced systems beyond just simply interacting with the UI elements. AppsPlayground [25] and SMV-Hunter [28] recognize the UI elements and generate text inputs in a more intelligent way. A3E [13] performs a targeted exploration of mobile apps guided by a taint-like static data flow analysis. DynoDroid [21] instruments the Android framework and uses the debugging interface (i.e., the adb) to monitor the UI interaction, and guide the generation of UI events for app testing. PUMA [18] provides a programmable interface for large scale dynamic app analysis. DECAF [20] and VanarSena [27] navigate various activities of Windows phone apps and seek to detect ads, flaws, or debug crashes. Brahmastra [14] efficiently executes third party components of an app to expose its potential harmful behavior such as private data collection via binary rewriting.
However, we still cannot directly use them for our purpose. Specifically, each system is designed for a particular application scenario with different goals: A3E for bug detection, SMV-Hunter for SSL/TLS man-in-the-middle vulnerability identification, DECAF for ads flaw issue (recent work also applied AppsPlayground for this detection [26]), VanarSena for crash debugging, and Brahmastra for targeted vulnerability (e.g., privacy leakage or access token hijacking) identification in 3rd party components of the local app. None of them focused on the remote server request message generation.
More importantly, aside from A3E, which runs on real devices, all of these systems run in an emulator, which has several limitations. First, an emulator lacks physical sensors and cannot provide a high-fidelity environment for testing the app. Second, emulation is slow: it often takes a long time to boot and restart, and it is sometimes unstable. Therefore, we would like to directly execute the apps under test on real phones. While A3E uses a real phone, its exploration of app activities can fail since it does not attempt to solve any constraints for more targeted execution.
Symbolic Execution. Being a systematic path exploration technique, symbolic execution has been widely used in many security applications (e.g., vulnerability identification [15], malware analysis [24], and exploit generation [12]). Recently, there have also been efforts to apply symbolic execution to the analysis of mobile apps for various applications, such as app testing in general [23],
Figure 3: The decompiled code of the onClick event handler for the password reset request in ShopClues.
```
 1 package com.shopclues;
 2 class y implements View$OnClickListener {
 3     EditText b;
 4     String v0 = this.b.getText().toString().trim();
 5     if(v0.equalsIgnoreCase("")) {
 6         Toast.makeText(this.a, "Email Id should not be empty", 1).show();
 7     } else if(!al.a(v0)) {
 8         Toast.makeText(this.a, "The email entered is not a valid email", 1).show();
 9     } else if(al.b(this.a)) {
10         this.a.c = new ac(this.a, v0);
11     } else {
12         Toast.makeText(this.a, "Please check your internet connection", 1).show();
13     }
14     return;
15 }
16
23 package com.shopclues.utils;
24 public class al {
25     public static boolean a(String arg1) {
26         boolean v0;
27         if(arg1 == null) {
28             return false;
29         } else if(Patterns.EMAIL_ADDRESS.matcher(((CharSequence)arg1)).matches()) {
30             v0 = true;
31         } else {
32             v0 = false;
33         }
34         return v0;
35     }
36 }
```
path exploration [10], and malware analysis [29]. However, some of them require access to app source code, which is impractical in our application; only IntelliDroid [29] works directly on the app binary (Java bytecode, essentially) during symbolic execution. While we might directly use IntelliDroid [29] to solve our problem, after examining its detailed design and the corresponding source code we found it is not suitable. Specifically, IntelliDroid does not attempt to perform any UI analysis, since malware tends to have significantly fewer UI elements in its interface compared to normal apps; consequently, IntelliDroid does not have to capture the constraints from UI elements. Second, IntelliDroid does not precisely inject an event (e.g., a particular button click). Instead, the events injected by IntelliDroid are at the system boundary, and an injected event may not be accurately delivered to the target app. Third, IntelliDroid requires instrumentation of the Android framework, but such instrumentation needs to run in an emulator, meaning certain app behavior may not get exposed. Therefore, we have to design our own symbolic execution and leverage the existing efforts, including those dynamic analyses, to automatically generate server request messages.
3. OVERVIEW
Scope and Assumptions. We focus on the Android platform, and analyze apps that use the HTTP/HTTPS protocols. Note that according to our evaluation, all of the tested mobile apps use the HTTP/HTTPS protocol at least once (there may be more than one protocol in a given app). We assume the app is not obfuscated and can be analyzed by Soot [2], a general Android APK analysis framework. Also, we primarily focus on string constraints, since they are often related to user input. Other non-linear constraints that standard solvers cannot handle are out of our scope.
Challenges. While the use of dynamic analysis and symbolic execution has spared us from many practical challenges, such as recognizing the protocol fields and automatically generating the request messages, we still encounter a number of other challenges:
- **How to instrument the analysis code.** When given a mobile app, we need to analyze the UI of each activity, provide proper input (such as a valid email address in our running example), and inject a corresponding event (such as the SUBMIT button click) into the app to trigger the server request messages. These analysis behaviors are often context sensitive, and we can trigger them only after the activities are created. Therefore, we have to instrument the original app with context-sensitive analysis code.
- **How to extract the path constraints.** Not all app execution paths are related to our server request message generation; we are only interested in the path constraints that lead to the final message sending events. Therefore, we have to identify the invocations of network message sending APIs and their path constraints that are controlled by the input, and then solve them to generate the proper input. Also, since the input to SMARTGEN is just the APK, we have to analyze the APK file directly to extract the constraints.
- **How to explore the app activities.** An app can have many activities (i.e., single screens containing various UI elements). The execution of one activity often determines the execution of others. At a given activity, we need a strategy to explore the others (e.g., a depth-first search or breadth-first search). This also implies we need to know the follow-up activities of a given activity. However, this is non-trivial, since it requires sophisticated code analysis to resolve the target activities.
Solutions. To address the above challenges, we have developed the following corresponding solutions:
- **Dynamically instrumenting the apps in real phones.** While rewriting the Android system code, including both the Java and native code, is a viable approach to inserting the analysis code into a target app, it would introduce a system-wide change to all apps, which may cause unstable behavior. Therefore, we propose a new approach that dynamically instruments the analysis code into the targeted app. At a high level, our approach uses API hooking to intercept the app execution flow dynamically, then performs an in-context analysis and uses Java reflection to manipulate the UI elements.
- **Extracting the path constraints of interest.** The execution of mobile apps is event driven, and we have to focus on the code of interest. To this end, we first build an extended call graph (ECG) for the app that connects not only the explicit edges but also the implicit ones such as call-backs. Starting from the network message sending APIs, we backward traverse the ECG, collect the path constraints, and meanwhile correlate the constraints with the user input if there is any. Then, we invoke a standard solver to solve the constraints. When the activity involving the network message sending event is created, we initialize the UI elements with the proper input provided by the solver.
- **Exploring the app activities using DFS.** We need to explore as many app activities as possible, in a systematic way. A given activity can trigger several other activities (e.g., based on the different buttons a user has clicked). While we could use a breadth-first search (BFS) to explore all possible activities, we prefer a depth-first search (DFS): since our analysis has already reached a given activity, we can keep exploring a next-layer activity further and then backtrack to explore the remaining same-layer activities recursively. Meanwhile, by hooking the onCreate event of a given activity, we can analyze all of its UI elements to determine whether any next-layer activities are associated with the current one, and use this knowledge to guide our DFS activity exploration.
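The DFS strategy described above can be sketched as a plain graph traversal; the activity names and the transition map below are hypothetical stand-ins for the next-layer activities that would be discovered by hooking onCreate:

```java
import java.util.*;

public class ActivityExplorer {
    // Hypothetical activity-transition graph: activity name -> next-layer
    // activities reachable from its UI elements (e.g., via button clicks).
    private final Map<String, List<String>> transitions;
    private final Set<String> visited = new LinkedHashSet<>();

    public ActivityExplorer(Map<String, List<String>> transitions) {
        this.transitions = transitions;
    }

    // Depth-first exploration: fully explore one next-layer activity before
    // backtracking to the remaining same-layer activities.
    public List<String> explore(String start) {
        dfs(start);
        return new ArrayList<>(visited);
    }

    private void dfs(String activity) {
        if (!visited.add(activity)) return; // skip already-explored activities
        for (String next : transitions.getOrDefault(activity, List.of())) {
            dfs(next);
        }
    }
}
```

The insertion-ordered `visited` set doubles as the exploration log, so revisiting an already-explored activity is a no-op.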
Overview. An overview of SMARTGEN is presented in Figure 4. There are three phases of analysis and five key components inside SMARTGEN.
- **Static Analysis.** The first phase of the analysis is to build the ECG, which contains all the possible function call transfer edges inferred statically. This is achieved by our Building ECG component, which takes the APK as input and produces the ECG as output.
- **Selective Symbolic Execution.** In the selective symbolic execution phase, the second and third components of SMARTGEN extract the path constraints of interest based on the ECG and then solve the constraints, if there are any, with a constraint solver. The output of this phase is the proper input value for each involved UI element in each possible activity.
4. DETAILED DESIGN
4.1 Building ECG
The goal of SMARTGEN is to generate the server request messages to expose the server URLs for a given app. Therefore, we should focus on the code path that finally invokes the targeted network message sending APIs. As such, we need to first build an extended call graph (ECG) of the Android app, which covers not only the explicit function call edges but also the implicit edges introduced by the call-back functions in Android framework, and then identify the code of our interest based on the ECG.
Since the input to SMARTGEN is just the APK, we should convert the APK into some intermediate representation (IR) suitable for our analysis. To this end, we use Soot [2], an Android app analysis framework that takes the APK as input and is able to perform various static analyses including call-graph construction, points-to analysis, def-use chains, and even taint analysis [17] in combination with FlowDroid [11]. It is thus a perfect framework for our ECG construction.
However, the call graph constructed directly from the Soot IR will miss the edges that implicitly call the Android framework APIs. For instance, thread.start and thread.run do not have an explicit call relation, but the Android framework uses a callback mechanism to ensure that execution first starts from thread.start and then reaches thread.run. Therefore, we need to add these implicit calls. How to systematically identify all of them in the Android framework is another challenge. Fortunately, EdgeMiner [16] has been designed to solve exactly this problem. Specifically, EdgeMiner is able to analyze the entire Android framework to automatically generate API summaries that describe the implicit control flow transitions, such as the callback relation of the thread.start and thread.run pair, and we hence directly use the summary result to connect the implicit edges.
To build the ECG, we scan the primary IR (namely the Jimple IR) produced by Soot, take each event handler (e.g., onCreate, onResume, onClick, onTextChanged, etc.) as a starting point, and recursively add the callee edges, if there are any, including the implicit ones guided by the summary produced by EdgeMiner. Since there are multiple such event handler functions, the output is a set of ECGs, each starting from one of them.
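As a minimal sketch of this construction (the summary pair below is illustrative of EdgeMiner-style output, not the tool's actual data), implicit callback edges can be spliced into an explicit call graph like so:

```java
import java.util.*;

public class EcgBuilder {
    // Illustrative summary pair in the style of EdgeMiner: a registration
    // API and the callback it implicitly triggers.
    static final Map<String, String> IMPLICIT_SUMMARY = Map.of(
        "java.lang.Thread.start", "java.lang.Thread.run");

    // Extend an explicit caller -> callees map with implicit callback edges:
    // whenever a method calls thread.start, thread.run is reachable too.
    static Map<String, Set<String>> buildEcg(Map<String, Set<String>> explicit) {
        Map<String, Set<String>> ecg = new HashMap<>();
        explicit.forEach((caller, callees) -> ecg.put(caller, new TreeSet<>(callees)));
        for (Set<String> callees : ecg.values()) {
            for (String callee : new ArrayList<>(callees)) {
                String callback = IMPLICIT_SUMMARY.get(callee);
                if (callback != null) callees.add(callback);
            }
        }
        return ecg;
    }
}
```

In the real system the summary table comes from EdgeMiner's analysis of the entire framework rather than a hand-written map.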
4.2 Extracting the Path Constraints
After we have built the ECGs, we traverse each ECG to determine whether there is an invocation of a network message sending API. If so, such an ECG is of interest, and we then build a control flow path (according to the Soot IR) from the entry point of the ECG to the invocation point, from which we extract the path constraints.
The Targeted APIs. We focus on two sets of network message sending APIs: those provided by the Android framework (e.g., HttpClient.execute), and the low level Socket APIs (e.g., Socket.getOutputStream). With these functions as targets, we traverse each ECG and identify the call paths that invoke them. When these APIs get called, they directly perform Internet connections to remote servers, which then generates the desired request messages with the exposed URLs.
Taint Analysis. However, the path constraints of our focus are often user input related, and we have to correlate them with the input entered via the UI elements. To this end, we taint the inputs from the UI elements and track their propagation to resolve their proper values. There are already publicly available tools such as FlowDroid [11] for taint analysis, and the Soot framework supports integration with FlowDroid, so we design our taint analysis atop FlowDroid using Soot. Since taint analysis is a well-established area, below we just briefly describe how we customize FlowDroid's taint analysis for our purpose.
- **Taint Sources.** The taint sources in our analysis are the user input related UI elements such as EditText (an editable text field, as shown in our running example), Check Box, Radio Button, Toggle Button, Spinner (a dropdown list), etc. We assign a unique index tag to each of these UI elements as the taint tag and propagate the tag when necessary.
- **Taint Propagation.** Since we use the FlowDroid taint analysis framework, we do not have to customize the taint propagation rules. At a high level, a taint tag is propagated along the direct data flow propagations according to the Soot IR (implicit taint propagation is therefore out of our focus).
- **Taint Sinks.** The taint sinks are the data use points of the tainted variables at the if-stmts in the Soot IR (note that a loop statement is implemented using if and goto in the Soot IR), and also the functions that perform comparisons (e.g., the string comparison APIs). We extract the constraints at each such sink.
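To make the source/propagation/sink roles concrete, here is a toy tag-propagation model (not FlowDroid's actual engine; the variable and tag names mirror the running example):

```java
import java.util.*;

public class TaintToy {
    // Taint environment: IR variable -> taint tag (e.g., the tag of an EditText).
    final Map<String, String> taint = new HashMap<>();
    // Constraints collected at sinks, phrased over taint tags.
    final List<String> constraints = new ArrayList<>();

    // Taint source: reading a UI element assigns a fresh tag to a variable.
    void source(String var, String tag) { taint.put(var, tag); }

    // Taint propagation: a direct data-flow assignment copies the tag.
    void assign(String dst, String src) {
        if (taint.containsKey(src)) taint.put(dst, taint.get(src));
    }

    // Taint sink: an if-stmt over a tainted variable yields a constraint,
    // rewritten in terms of the tag rather than the variable.
    void sink(String var, String condition) {
        String tag = taint.get(var);
        if (tag != null) constraints.add(condition.replace(var, tag));
    }
}
```

Replaying the running example's statements through this model yields a constraint over the tag of the email field rather than over a transient IR variable.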
Extracting the Path Constraints. Starting from the entry point (e.g., an event handler function f1) of an ECG that eventually calls a network message sending API (e.g., in function f2), we iterate over the Soot IR along the control flow path from f1 to f2, perform the above taint analysis, and extract the path constraints at the encountered taint sinks. More specifically, if there is a taint source (based on the semantics of the IR), we define the taint tag for the corresponding source (e.g., at line 7 in our running example in Figure 3, where v6 is defined, we define a taint tag, say t6, for v6); if there is a taint propagation, we propagate the taint as well; if there is a taint sink, we extract the constraints based on the semantics of the if-stmt or the comparison APIs that use the tainted variables (e.g., at line 8, we extract the constraint that t8.equalsIgnoreCase("") is not true).
Our taint analysis is inter-procedural. Whenever a function, e.g., f4, is called along the path from f1 to f2, we iterate over the IR of f4. If any of the arguments of f4 uses the tainted variables, or a global or heap variable defined outside of f4 is also tainted (recall that Soot supports def-use chain analysis), we extract the path constraints concerning the return value computation, provided the return value of f4 is used as a path condition in the caller. In cases where a tainted variable (e.g., t9) is defined outside f4, we find the definition of t9 and solve the constraint for t9 if there is any.
4.3 Solving the Constraints
Having extracted the path constraints, we have to solve them and provide concrete proper input such that the final network sending APIs can be invoked. To this end, we use Z3-str [30] with its recent regular expression extension, an open source string solver based on Z3 [9], to solve the constraints we have collected. Interestingly, many of the string APIs used in Android have a corresponding Z3-str API (e.g., length, contains, matches, startsWith, endsWith). To make the Z3 solver understand the extracted constraints, we have to translate them into the corresponding Z3-str APIs. After the API translation, the constraints can be recognized by Z3-str, and we just invoke it to solve them. This is a very standard procedure; we then associate the resolved values with each constraint and store them in a configuration file. Later, when the corresponding activities are executed, we load the configuration file and use these values to initialize the corresponding UI elements.
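For the running example, the solved input must satisfy two constraints: it is not the empty string, and it matches an email pattern. A hand-rolled check and witness, standing in for the actual Z3-str invocation (the regex is a simplified stand-in for Android's Patterns.EMAIL_ADDRESS), looks like:

```java
import java.util.regex.Pattern;

public class InputSynthesizer {
    // Simplified stand-in for Android's Patterns.EMAIL_ADDRESS.
    static final Pattern EMAIL = Pattern.compile("[^@\\s]+@[^@\\s]+\\.[a-z]{2,}");

    // The two path constraints from the running example:
    // the input is not the empty string, and the email regex matches.
    static boolean satisfies(String candidate) {
        return !candidate.equalsIgnoreCase("") && EMAIL.matcher(candidate).matches();
    }

    // Trivial "solver": return a witness value satisfying the constraints.
    // The real pipeline translates the constraints to Z3-str and solves them.
    static String solve() {
        String witness = "test@example.com";
        return satisfies(witness) ? witness : null;
    }
}
```

The witness is what would be written into the per-activity UI configuration file for later initialization of the EditText field.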
4.4 Runtime Instrumentation
Next, we have to actually execute the app to concretely generate the network request messages with the proper input that has been resolved by our selective symbolic execution. The proper input can be provided only when the activity is created (i.e., it is context-sensitive). Therefore, we have to instrument the app with our analysis code and execute them together.
There are three standard state-of-the-art approaches to instrumenting analysis code into a real app execution: (1) system code rewriting, (2) repackaging the app with the analysis code, and (3) using the Android system tools (e.g., adb). However, we found these approaches inefficient for our large scale study. More specifically, if we rewrite the system code including the Android framework (as done by IntelliDroid [29]), we have to execute the modified system code in an emulator. Similarly, when using adb to supervise the UI elements (as in DynoDroid [21]), we are also required to run in an emulator, since the ViewServer, the critical component that allows adb to recognize each UI element, is often turned off on real devices. While repackaging seems the simplest, the repackaged code may not be executed due to an integrity check by the app itself, if there is one.
Therefore, to advance the state-of-the-art, we propose a new dynamic instrumentation approach for Android app analysis running in real phones. The key idea is to use API hooking to intercept the control flow of the app execution, insert the analysis code, and then invoke an analysis thread to perform an in-context analysis and provide the app with the solved proper input by using Java reflection [22]. Note that Java reflection provides a set of functions that allow us to examine or modify the classes, interfaces, fields, and methods of an app at runtime in the Java virtual machine (VM). We hence use this feature to access and manipulate the UI elements at runtime while the app is executing in the Dalvik VM.
How the analysis code is instrumented and executed in the app is illustrated in Figure 5. In particular, we divide all of the app's executed code into three categories and describe below how each is executed:
Original App Code. It still runs as normal in the main execution thread. Only when an event of our interest, such as activity onCreate or onResume, gets executed does the control flow transfer to the instrumentation code after the execution of the event handler; this control flow transfer is achieved through the well-known API hooking technique [19].
Instrumentation Code. Once our API hooking intercepts an API of our interest (e.g., onCreate), it invokes the instrumentation code, which is still executed in the app's main execution thread.
Based on the current execution context, it injects the analysis code (via `insert_analysis_code`) into the analysis thread and then starts the execution of the analysis thread. At this moment, there are at least two execution threads: the main thread and the analysis thread. The control flow of the main thread goes back to the original app execution, and the analysis thread starts executing after the instrumentation finishes.
Analysis Code. The analysis code performs an in-context analysis. As illustrated in Figure 5, our analysis code first performs UIAnalysis, which is built atop Robotium [4], a library that allows us to retrieve the UI elements at runtime without any prior knowledge of the app. Based on the result of our UIAnalysis, we get the UI elements of our interest, such as the SUBMIT button. Next, it uses Java reflection to retrieve and manipulate the fields and methods associated with the corresponding UI elements. Which UI elements to manipulate is decided by our earlier selective symbolic execution. Specifically, our earlier analysis has determined which event handler we have to execute, such as the SUBMIT button's `onClick`. Therefore, our analysis code uses Java reflection to query the UI elements involved in the `onClick` handler, sets up their proper input values based on the constraints we have already solved, and invokes the `trigger` method to execute the `onClick` method in the analysis code.
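The reflection part of the analysis code can be illustrated with plain Java; the widget class below is a hypothetical stand-in, whereas on a device the same calls operate on android.widget objects:

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;

public class ReflectionDriver {
    // Hypothetical stand-in for an EditText-like widget with a private field
    // and a private handler method.
    static class FakeEditText {
        private String text = "";
        private String onClick() { return "submitted:" + text; }
    }

    // Fill a widget's private field with the solver-provided input.
    static void setText(Object widget, String value) throws Exception {
        Field f = widget.getClass().getDeclaredField("text");
        f.setAccessible(true); // bypass access control, as reflection allows
        f.set(widget, value);
    }

    // Invoke a handler method by name, mimicking triggering onClick.
    static Object trigger(Object target, String method) throws Exception {
        Method m = target.getClass().getDeclaredMethod(method);
        m.setAccessible(true);
        return m.invoke(target);
    }
}
```

The same two primitives, field access plus method invocation by name, are all the analysis thread needs to fill in inputs and fire event handlers.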
4.5 Request Message Generation
With all the building blocks enabled, we next describe how we navigate the various app activities and finally generate the desired request messages to expose the URLs. An Android app can have many activities, and they are only executed when the corresponding events are generated. The first activity is executed automatically when the app is started. When a given activity is executed, we hook its `onCreate` event, and then we are able to inspect all the UI elements of the current activity. Based on the UI elements, we are able to determine all other activities that can be invoked from the current activity. As described earlier, we use a DFS strategy to execute the activities. A request message is generated after the corresponding network message sending API is executed. We then log this message and extract its URLs, if there are any, as our final output.
5. EVALUATION
We have implemented SMARTGEN atop the Android 4.2 platform. In particular, we implement our ECG construction using the Soot [2] framework and taint analysis using FlowDroid [11]. We use Z3-str [30] to solve the path constraints and implement our dynamic runtime instrumentation with the v2.7 Xposed [8] framework. In total, we wrote about 7,000 lines of Java code and 1,000 lines of Python code atop these open source frameworks.
In this section, we present the evaluation results of SMARTGEN. During the evaluation, we aim to answer the following questions, as set up in our design objectives in §2.1:
- **How Automated?** Can SMARTGEN be executed without any human intervention?
- **How Scalable?** Can SMARTGEN handle a large-volume dataset? How fast is SMARTGEN in processing each app?
- **How Systematic?** Does the selective symbolic execution in SMARTGEN really encounter many constraints? How significant is this component in terms of the contribution to the number of messages generated and URLs exposed?
We first describe how we set up the experiment in §5.1, and then present the detailed result in §5.2. Finally, we present how to use the exposed URLs in a harmful URL detection study in §5.3.
5.1 Experiment Setup
**Dataset Collection.** The inputs to SMARTGEN are Android apps. Since there are millions of Android apps, we cannot test all of them. We thus decided to crawl the top 5,000 (in terms of number of installs) ranked apps from Google Play. To this end, we first used the Scrapy framework [5] to crawl the meta-data of each app (including the number of installs and the category) from Google Play, which contained the meta-data for over 1.5 million apps as of March 2016. Then we ranked the apps based on the number of installs and crawled the APK of each app starting from the top-ranked one. After downloading an app, we checked whether it had the Internet permission by looking at its `AndroidManifest.xml` file. If it did not have the Internet permission, we excluded the app from our data set. We kept crawling until our data set reached 5,000 apps. During the crawling, we discarded 219 apps that did not have the Internet permission in their manifest files, such as `Shake Calc` (a calculator) and `aCalendar` (an Android calendar), as well as 473 apps that either do not run on the ARM architecture (apps utilizing x86) or could not be downloaded because of the region restrictions imposed by Google Play. The last app we downloaded had over one million installs.
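The permission filter in this step amounts to a simple check on the (decoded) manifest; a minimal sketch, assuming the binary `AndroidManifest.xml` has already been decoded to XML text (e.g., with apktool):

```java
import java.util.regex.Pattern;

public class ManifestFilter {
    // Matches a <uses-permission> declaration of the INTERNET permission.
    private static final Pattern INTERNET = Pattern.compile(
        "<uses-permission[^>]*android\\.permission\\.INTERNET[^>]*/?>");

    // Keep an app only if its decoded manifest declares the permission.
    static boolean hasInternetPermission(String manifestXml) {
        return INTERNET.matcher(manifestXml).find();
    }
}
```

Apps failing this check, like the 219 discarded above, cannot send server request messages and are useless to the pipeline.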
**Environment Configuration.** SMARTGEN contains three phases of analyses: static analysis, selective symbolic execution, and dynamic analysis. The first two phases were executed on an Ubuntu 14.04 server machine with 16-core 2.393GHz Intel Xeon CPUs and 24GB of memory. The last phase was executed on a Samsung Galaxy S III phone running Android 4.2. To make the static analysis and symbolic execution run faster, we started 10 threads to perform these analyses on our server. After the analysis finished, each target app was automatically pushed to the Galaxy phone, along with the solved constraints, which are stored as configuration files. The finally generated request messages for each app were collected at a man-in-the-middle proxy. To gather the HTTPS messages, we installed a root certificate on the phone.
5.2 Detailed Result
**Overall Performance.** The overall statistics on how each phase of our analysis performed are presented in Table 1. We can see that these 5,000 apps consumed 128.2 GB of disk space. The first two phases of our analysis (i.e., static analysis and symbolic execution) took 90,143 seconds (i.e., 25.04 hours) in total, and each app took 18.03 seconds on average. During this analysis, it identified 147,327 calls to the targeted APIs, extracted 47,602 constraints,
<table>
<thead>
<tr>
<th>Item</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td># Apps</td>
<td>5,000</td>
</tr>
<tr>
<td>Size of the Dataset (G-bytes)</td>
<td>128.2GB</td>
</tr>
<tr>
<td>Time of the first two phases analyses (s)</td>
<td>90,143</td>
</tr>
<tr>
<td># Targeted API Calls</td>
<td>147,327</td>
</tr>
<tr>
<td># Constraints</td>
<td>47,602</td>
</tr>
<tr>
<td># UI Configuration files generated</td>
<td>25,030</td>
</tr>
<tr>
<td>Time of Dynamic Analysis (s)</td>
<td>486,446</td>
</tr>
<tr>
<td># Request Messages</td>
<td>257,755</td>
</tr>
<tr>
<td># Exposed URLs</td>
<td>297,780</td>
</tr>
<tr>
<td># Unique Domains</td>
<td>18,193</td>
</tr>
<tr>
<td>Logged Message Size (G-bytes)</td>
<td>24.0</td>
</tr>
</tbody>
</table>
**Table 1: Summary of the Performance of SMARTGEN.**
and generated 25,030 UI configuration files based on the solved constraints.
Meanwhile, the details of the extracted string constraints are presented in Table 2. (Note that we also encountered other integer constraints, such as when a value needs to be greater than 18; the details of these constraints are not presented here.) We notice that, interestingly, there are many "Not null" constraints. This is because during an app execution a NullPointerException may cause crashes, and developers (or the system code) therefore check for it very often. Meanwhile, to validate whether a UI item contains a user input, developers also often use a String length constraint (to make sure it is not 0). Some apps also use the String length to validate phone number input. Also, we found some apps just use String contains with "@" to validate an email address input, while other apps use sophisticated regular expressions (e.g., Matcher matches) for the matching.
With the solved constraints, we then performed the dynamic analysis on each app on our Galaxy phone. In total, it took 486,446 seconds (i.e., 135 hours) to execute these 5,000 apps (each app needed 97 seconds on average). Note that among the 97 seconds, the installation and uninstallation time is on average 17 seconds. However, if we execute an app inside an emulator, the installation time for a 25MB app takes about 60 seconds. That is one of the reasons why we designed SMARTGEN to use a real phone. During the dynamic analysis, we observed 257,755 request messages (55.7% use the HTTP protocol, and 44.3% use HTTPS) generated by the tested apps, and in total 297,780 URLs in both request and response messages. Among them, there are 18,193 unique domains. The final size of all the traced request and response messages collected at our proxy is 24.0 GB.
Comparison with Monkey. To understand the contribution of our selective symbolic execution, we compare SMARTGEN with a widely used dynamic analysis tool, Monkey [7]. At a high level, Monkey is a program, executed in an emulator or on a real phone, that generates pseudo-random streams of user events, such as clicks, touches, or gestures, as well as a number of system-level events, all for app testing. For a fair comparison, we also ran Monkey on our real Galaxy phone to test each of our apps, and configured Monkey to generate 2,000 events at an interval of 100 milliseconds. That is, for each app, Monkey will take up to 200 seconds just to test it.
In total, it took 1,083,530 seconds (i.e., 301 hours) to process these apps. Each app took on average 216.7 seconds (around 200 seconds for the testing, and 17 seconds for the installation and uninstallation). We also note that using Monkey for the testing is not 100% automated. This is because Monkey randomly sends events to the system without specifying the recipients. These random inputs may click system buttons, which may lock the screen, turn off the network connection, and even shut down the phone. Therefore, we disabled the screen locking functionality and developed a daemon program to constantly check the Internet connectivity and turn on the networking if necessary, but we cannot disable the phone power-off event and must manually power on the phone. This is the only event Monkey cannot handle automatically, and we encountered 17 phone power-off events. We excluded the power-off and restart time from our evaluation in this case. For all these tested apps, Monkey generated 79,778 request messages, with 6,384 domain names. The total size of the logged messages is 12.8 GB.
A detailed comparison between SMARTGEN and Monkey for these tested apps is presented in Fig. 6. We compare them based on the execution time, the total number of request messages generated, the total number of domains in the requested messages, and finally the total size of the request messages. We can see that SMARTGEN took only 53%, i.e., (90,143 + 486,446)/1,083,530, of the execution time of Monkey, but it generated 3.2X the request messages, 2.3X the unique URLs, 1.9X the unique domains, and 1.9X the logged message size, compared to the result from Monkey.
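Several of the multipliers quoted above can be recomputed directly from the raw numbers in this section (the unique-URL and unique-domain multipliers come from Fig. 6 and are not recomputed here):

```java
public class ComparisonCheck {
    // SMARTGEN: static analysis + symbolic execution time, plus dynamic analysis,
    // divided by Monkey's total processing time.
    static double timeRatio()    { return (90_143 + 486_446) / 1_083_530.0; } // ~0.53
    // Request messages: 257,755 (SMARTGEN) vs. 79,778 (Monkey).
    static double messageRatio() { return 257_755.0 / 79_778.0; }             // ~3.2
    // Logged message size: 24.0 GB vs. 12.8 GB.
    static double logSizeRatio() { return 24.0 / 12.8; }                      // ~1.9
}
```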
5.3 Harmful URL Detection
Having so many URLs from the top 5,000 mobile apps, we are interested in whether there are any harmful URLs among them. To this end, we submitted all of the exposed 297,780 URLs to the harmful URL detection service VirusTotal, which identified 8,634 unique harmful URLs. Note that VirusTotal had integrated 68 malicious URL scanners (as of the time of this experiment), and each submitted brand new URL is analyzed by all of the scanners. The scanners that identified at least one harmful URL are reported in the first column of Table 3, followed by the number of Phishing sites, the number of malware sites (i.e., the URL is identified as malware), and the number of malicious sites from the 2nd to the 4th columns, respectively. The last column reports the number of unique harmful URLs identified by the corresponding scanner, and the last row reports the number of unique URLs in each category. The total number of unique harmful URLs is 8,634 because there are 387 sites detected as both malware and malicious. Also, note that one harmful URL can be identified by several engines; that is, there are some overlapping URLs in the last column of Table 3. To clearly show these overlaps, we present the number of harmful URLs and the number of engines that recognize them in Table 4. Interestingly, we can see that most harmful URLs are detected by just one of the engines, and only one URL is detected simultaneously by 8 engines. Based on the timestamp of the queried result from VirusTotal, we notice that 83% of the URLs were first
Table 3: Statistics of Harmful URLs Detected by Each Engine
<table>
<thead>
<tr>
<th>Detection Engine</th>
<th>#Phishing Sites</th>
<th>#Malware Sites</th>
<th>#Malicious Sites</th>
<th># URLs</th>
</tr>
</thead>
<tbody>
<tr>
<td>ADWINUS Labs</td>
<td>0</td>
<td>4</td>
<td>0</td>
<td>4</td>
</tr>
<tr>
<td>AegisLab WebGuard</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>AutoShun</td>
<td>863</td>
<td>0</td>
<td>0</td>
<td>863</td>
</tr>
<tr>
<td>Avira</td>
<td>2002</td>
<td>941</td>
<td>0</td>
<td>3003</td>
</tr>
<tr>
<td>BitDefender</td>
<td>0</td>
<td>191</td>
<td>0</td>
<td>191</td>
</tr>
<tr>
<td>Bluev</td>
<td>0</td>
<td>0</td>
<td>5</td>
<td>5</td>
</tr>
<tr>
<td>CLEAN MX</td>
<td>0</td>
<td>0</td>
<td>14</td>
<td>14</td>
</tr>
<tr>
<td>CRDF</td>
<td>0</td>
<td>0</td>
<td>150</td>
<td>150</td>
</tr>
<tr>
<td>CloudStat</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>Dr.Web</td>
<td>0</td>
<td>2330</td>
<td>0</td>
<td>2330</td>
</tr>
<tr>
<td>ESET</td>
<td>0</td>
<td>75</td>
<td>0</td>
<td>75</td>
</tr>
<tr>
<td>Emsisoft</td>
<td>1</td>
<td>43</td>
<td>13</td>
<td>44</td>
</tr>
<tr>
<td>Fortinet</td>
<td>8</td>
<td>469</td>
<td>0</td>
<td>477</td>
</tr>
<tr>
<td>Google Safebrowsing</td>
<td>0</td>
<td>13</td>
<td>2</td>
<td>15</td>
</tr>
<tr>
<td>Kaspersky</td>
<td>0</td>
<td>2</td>
<td>0</td>
<td>2</td>
</tr>
<tr>
<td>Malwarebytes hpHosts</td>
<td>0</td>
<td>1103</td>
<td>0</td>
<td>1103</td>
</tr>
<tr>
<td>ParetoLogic</td>
<td>0</td>
<td>800</td>
<td>0</td>
<td>800</td>
</tr>
<tr>
<td>Quick Heal</td>
<td>0</td>
<td>0</td>
<td>2</td>
<td>2</td>
</tr>
<tr>
<td>Quttera</td>
<td>0</td>
<td>0</td>
<td>6</td>
<td>6</td>
</tr>
<tr>
<td>SCUMWARE.org</td>
<td>0</td>
<td>8</td>
<td>0</td>
<td>8</td>
</tr>
<tr>
<td>Sophos</td>
<td>0</td>
<td>56</td>
<td>0</td>
<td>56</td>
</tr>
<tr>
<td>Sucuri SiteCheck</td>
<td>0</td>
<td>248</td>
<td>0</td>
<td>248</td>
</tr>
<tr>
<td>ThreatHive</td>
<td>0</td>
<td>8</td>
<td>8</td>
<td>8</td>
</tr>
<tr>
<td>Trendware</td>
<td>0</td>
<td>80</td>
<td>80</td>
<td>80</td>
</tr>
<tr>
<td>Websense ThreatSeeker</td>
<td>0</td>
<td>0</td>
<td>56</td>
<td>56</td>
</tr>
<tr>
<td>Yandex Safebrowsing</td>
<td>0</td>
<td>173</td>
<td>0</td>
<td>173</td>
</tr>
</tbody>
</table>
Σ #Harmful URLs: 2,071 (Phishing), 3,818 (Malware), 3,826 (Malicious), 9,715 (URLs)
Σ #Unique Harmful URLs: 2,071 (Phishing), 3,722 (Malware), 3,228 (Malicious), 8,634 (URLs)
Table 4: # Engines of Harmful URLs
<table>
<thead>
<tr>
<th>Detected by # Engines</th>
<th># Unique Harmful URLs</th>
</tr>
</thead>
<tbody>
<tr>
<td>8</td>
<td>1</td>
</tr>
<tr>
<td>7</td>
<td>1</td>
</tr>
<tr>
<td>6</td>
<td>2</td>
</tr>
<tr>
<td>5</td>
<td>13</td>
</tr>
<tr>
<td>4</td>
<td>63</td>
</tr>
<tr>
<td>3</td>
<td>33</td>
</tr>
<tr>
<td>2</td>
<td>751</td>
</tr>
<tr>
<td>1</td>
<td>7770</td>
</tr>
</tbody>
</table>
Σ # Unique Harmful URLs = 8634
analyzed by VirusTotal. Among the detected 8,634 URLs, we also notice that 84% of them are harmful URLs newly identified because of our research.
While we could just trust the detection result from VirusTotal, to confirm that these URLs are indeed malicious we manually examined the one that had been identified by 8 engines. Interestingly, this URL actually points to an APK file. We visited this URL and downloaded the APK, then submitted this suspicious APK file to VirusTotal; this time, 14 out of 55 file scanners reported that the APK is malicious. We reverse engineered this file and found it tries to acquire the root privilege of the phone by exploiting kernel vulnerabilities, which confirms that this is indeed a harmful URL.
6. LIMITATIONS AND FUTURE WORK
SMARTGEN clearly has limitations. First, there might be missing paths in the ECG (if an edge is missed by EdgeMiner [16]), or infeasible paths that cannot be solved (currently our solver terminates if it cannot produce any result after 300 seconds). Second, not all of the app activities have been explored, especially when an app enforces access control. More specifically, certain app activities are only displayed after the user has successfully logged in. However, SMARTGEN did not perform any automatic registration with these 5,000 apps, so it is certainly not able to trigger these activities. How to trigger these activities for a given mobile app is therefore one of our immediate future works.
Currently, we have only demonstrated how to use the exposed URLs to detect whether an app communicates with any malicious sites. There are certainly many other applications, such as server vulnerability identification [31]. For instance, we can use the generated server request messages as seeds for penetration testing, to see whether the server contains any exploitable vulnerabilities such as SQL injection, cross-site scripting (XSS), or cross-site request forgery (CSRF). We leave the study of vulnerability fuzzing to future work.
We can also apply the selective symbolic execution of SMARTGEN to solve other problems. For instance, by changing the targeted APIs to those security-sensitive ones (e.g., getDeviceId), we can collect and solve the constraints along the execution path to trigger these APIs. Through this, we are likely able to further observe how sensitive data is collected and perhaps find privacy leakage vulnerabilities in real apps. Part of our future work will also explore these applications.
7. CONCLUSION
We have presented SMARTGEN, a tool that automatically generates server request messages and exposes the server URLs of a mobile app by using selective symbolic execution, and demonstrated how to use SMARTGEN to detect malicious sites based on the exposed URLs for the top 5,000 Android apps in Google Play. Unlike prior efforts, SMARTGEN focuses on the constraints from the UI elements and solves them to trigger the networking APIs. Built atop API hooking and Java reflection, it also features a new runtime app instrumentation technique that is able to more efficiently instrument an app and perform an in-context analysis. Our evaluation with the top 5,000 ranked mobile apps has demonstrated that with SMARTGEN we are able to find 297,780 URLs, among which 8,634 are malicious sites according to the URL classification results from VirusTotal.
Acknowledgment
We thank VirusTotal for providing premium services during the evaluation of a large volume of (new) URLs. We also thank the anonymous reviewers for their helpful comments. This research was supported in part by AFOSR under grants FA9550-14-1-0119 and FA9550-14-1-0173, and NSF award 1453011. Any opinions, findings, conclusions, or recommendations expressed are those of the authors and not necessarily of the AFOSR and NSF.
8. REFERENCES
Dominance Programming for Itemset Mining
Benjamin Negrevergne*, Anton Dries*, Tias Guns*, Siegfried Nijssen*†
*Department of Computer Science, KU Leuven, Belgium
†LIACS, Universiteit Leiden, The Netherlands
Email: {firstname.lastname}@cs.kuleuven.be
Abstract—Finding small sets of interesting patterns is an important challenge in pattern mining. In this paper, we argue that several well-known approaches that address this challenge are based on performing pairwise comparisons between patterns. Examples include finding closed patterns, free patterns, relevant subgroups and skyline patterns. Although progress has been made on each of these individual problems, a generic approach for solving these problems (and more) is still lacking. This paper tackles this challenge. It proposes a novel, generic approach for handling pattern mining problems that involve pairwise comparisons between patterns. Our key contributions are the following. First, we propose a novel algebra for programming pattern mining problems. This algebra extends relational algebras in a novel way towards pattern mining. It allows for the generic combination of constraints on individual patterns with dominance relations between patterns. Second, we introduce a modified generic constraint satisfaction system to evaluate these algebraic expressions. Experiments show that this generic approach can indeed effectively identify patterns expressed in the algebra.
I. INTRODUCTION
Pattern mining constitutes a well established class of tasks in data mining. The most well-known task is frequent itemset mining, which consists in finding all sets of items that have a high support in a given transactional database. Unfortunately, basic frequent itemset mining is not very useful. The number of frequent itemsets is huge in many databases, even when using high support thresholds.
A large body of work has sought to solve this pattern explosion by mining for patterns under constraints. Constraint-based pattern mining is concerned with finding all patterns \( \pi \) in a pattern language \( \mathcal{L} \) that satisfy some constraint \( \phi \) [1]:
\[
\text{Th}(\mathcal{L}, \mathcal{D}, \phi) = \{ X \in \mathcal{L} \mid \phi(X, \mathcal{D}) \text{ is true} \}. \tag{1}
\]
The constraint \( \phi \) is typically a conjunction of multiple constraints that can be defined on the pattern \( X \) or the data \( \mathcal{D} \). Given that \( \phi \) is evaluated on patterns individually, \( \phi \) is often called a local constraint [2]. Constraints usually come from domain-specific insights or are provided by the user to remove uninteresting patterns.
For example, one may require that the size of a pattern is smaller than some threshold, or that it does (not) contain certain items. One can also require that patterns have a high utility [3], that they satisfy syntactical constraints [4], or that they individually score well with respect to a given statistical test [5].
Numerous approaches to pattern mining have been developed to effectively find the patterns adhering to a set of local constraints.
The benefit of a constraint-based framework is twofold. First, users can combine existing constraints to formulate new problems according to their needs. Second, researchers can identify classes of constraints with similar properties and focus their attention on devising generic pruning strategies for these classes of constraints [6]. In particular, recent work has shown that constraint programming provides a generic framework to capture many pattern mining settings [7].
However, the constraint-based pattern mining framework has limitations: several types of mining tasks cannot be formulated using constraints over individual patterns.
Let us consider the problem of relevant pattern mining [8] (or subgroup discovery [9]) as an example. This is a pattern mining task in a supervised setting where two databases are given (referred to as “positive” and “negative”). Mining relevant patterns consists in finding patterns that discriminate the positive dataset \( (\mathcal{D}^+) \) from the negative one \( (\mathcal{D}^-) \). A pattern \( P_1 \) that occurs in positive examples \( T_1^+ \) and negative examples \( T_1^- \) can be considered irrelevant in this setting if there is another pattern \( P_2 \) that occurs in positive examples \( T_2^+ \) and negative examples \( T_2^- \), for which:
\[
T_1^+ \subseteq T_2^+ \text{ and } T_2^- \subseteq T_1^-.
\]
Since \( P_2 \) discriminates the two datasets better than \( P_1 \), we would like to specify that \( P_2 \) is a better solution than \( P_1 \) and that \( P_1 \) is a solution only if \( P_2 \) is not. Clearly, local constraints are inadequate for this purpose because they consider patterns individually.
Relevant pattern mining is not an isolated case. Many other pattern mining settings cannot be formalized adequately using conjunctions of constraints. Bonchi and Lucchese [10] have shown that combinations of closedness and monotonic constraints such as the max-cost constraint can lead to ambiguous problem definitions in the constraint-based mining framework. Moreover, Crémilleux et al. [2] highlighted the need to use global constraints on patterns (i.e., constraints whose satisfaction depends on more than one pattern) to address problems such as finding condensed representations of patterns or top-\( k \) sets of patterns.
The main insight of this paper is that these settings can be formulated using a combination of constraints and dominance relations. Dominance relations are pairwise preferences between patterns. They can be used to express the idea that a pattern \( P_1 \) is preferred over another pattern \( P_2 \), or, in our terminology, that \( P_1 \) dominates \( P_2 \).
Building on this observation we introduce a unified algebra that can express both constraints and dominance relations. A key component of this framework is the dominance algebra, which allows us to compose pairwise dominance relations into complex pre-orders among the patterns. We will show that many settings in the pattern mining literature can be formulated elegantly using compositions of constraints and dominance relations. Here, we do not only consider relevant subgroups, but also maximal patterns [11], closed [12] and free patterns [13] with any type of local constraint, sky patterns [14], dominated patterns in PN spaces [15], as well as new settings.
Another important contribution of this paper is that we demonstrate that this framework does not only provide a uniform approach to formulate these tasks, but also leads to a generic and effective method to search for solutions. Indeed, both local constraints and dominance relations can be used in a generic way to prune the search space. The expressions in this algebra can be evaluated effectively using a modified version of a constraint programming system [7]. Our experiments demonstrate that the resulting system exploits the dominance relations effectively and performs better than naive approaches, and in several cases, even better than specialized algorithms.
The paper is organized as follows: Section II gives an overview of several well-known pattern mining settings and the basic principles of dominance programming; Section III introduces the unified algebra to describe constraints and dominance relations. Section IV provides examples of problems expressed in the dominance algebra. Section V describes a method for evaluating expressions in this algebra based on constraint programming technology. Section VI evaluates the approach experimentally.
II. DOMINANCE RELATIONS IN ITEMSET MINING
In this section, we introduce several well-known itemset mining settings, and demonstrate how the principle of dominance programming can be used to formulate these settings.
The itemset mining problem can be defined as follows. Let $I = \{1, \ldots, n\}$ be a set of items and $T = \{1, \ldots, m\}$ a set of transaction identifiers. A dataset is a set $D \subseteq I \times T$. The cover of an itemset is defined as:
$$cover_D(X) = \{ t \in T : \forall i \in X, (i, t) \in D \}$$
(2)
and contains the identifiers of transactions in which all items of $X$ occur.
Frequent itemset mining consists in enumerating all the subsets $X$ of $I$ whose cover is larger than a user defined minimum frequency threshold $\theta$:
$$Th_{fi} = Th(2^I, D, \phi) = \{ X \subseteq I \mid |cover_D(X)| \geq \theta \}. \quad (3)$$
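As a concrete (if naive) illustration, the definitions in Equations (2) and (3) can be sketched directly in Python by exhaustive enumeration; the toy dataset and the names `cover` and `frequent_itemsets` are ours, not from the paper:

```python
from itertools import combinations

def cover(D, X):
    """Eq. (2): identifiers of transactions containing every item of X.
    D is a set of (item, transaction) pairs."""
    transactions = {t for (_, t) in D}
    return {t for t in transactions if all((i, t) in D for i in X)}

def frequent_itemsets(items, D, theta):
    """Eq. (3): all subsets of `items` whose cover has size >= theta."""
    result = []
    for k in range(len(items) + 1):
        for X in combinations(sorted(items), k):
            if len(cover(D, set(X))) >= theta:
                result.append(frozenset(X))
    return result

# toy dataset: transactions 1:{a,b}, 2:{a,b,c}, 3:{b,c}
D = {("a", 1), ("b", 1), ("a", 2), ("b", 2), ("c", 2), ("b", 3), ("c", 3)}
print(sorted(len(X) for X in frequent_itemsets({"a", "b", "c"}, D, 2)))
# → [0, 1, 1, 1, 2, 2]
```

The exponential enumeration is only for illustration; real miners prune the search space.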
A key observation in this paper is that many settings are cumbersome to formulate with constraints only, but are more easily described with combinations of constraints and dominance relations. This includes maximal, closed and free itemset mining, relevant subgroup discovery and sky patterns. We will first introduce these problems, where at this moment we closely follow the notation used in the papers that introduced these settings.
a) Maximal Itemset Mining: Maximal frequent itemsets are maximal in that there exists no larger itemset that is still frequent:
$$\{ X \subseteq I \mid |cover_D(X)| \geq \theta \land \nexists Y \supset X : |cover_D(Y)| \geq \theta \} \quad (4)$$
Here, $Y$ dominates $X$ iff $Y \supset X \land |cover_D(Y)| \geq \theta$. Observe that $|cover_D(Y)| \geq \theta$ can be computed independently of $X$ and is hence a local constraint. To mine maximal patterns, we can simply introduce a dominance relation between two patterns $X$ and $Y$ stating that $Y$ dominates $X$ iff $Y \supset X$; we are interested in those patterns that are not dominated within the set of all itemsets that are frequent.
b) Closed Frequent Itemset Mining: This setting was introduced by Pasquier et al. [12]. It can be formalized as the problem of finding
$$\{ X \subseteq I \mid |cover_D(X)| \geq \theta \land \nexists Y \subseteq I : Y \supset X \land cover_D(Y) = cover_D(X) \} \quad (5)$$
Hence $Y$ dominates $X$ iff $Y \supset X \land cover_D(Y) = cover_D(X)$. In this case, if a solution is not dominated by any other, it is a closed itemset.
c) Free Itemset Mining: Free itemsets are the minimal generators of the closed frequent itemsets [13]. The difference with closed itemsets is that we now prefer the smallest subsets among patterns that cover the same transactions. The dominance relation is: $Y$ dominates $X$ iff $Y \subset X \land cover_D(Y) = cover_D(X)$.
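The three settings above differ only in their pairwise dominance test. A minimal self-contained sketch (toy dataset and helper names are ours) that mines frequent itemsets and then keeps the non-dominated ones under each relation:

```python
from itertools import combinations

def cover(D, X):
    """Transactions containing every item of X; D is a set of (item, t) pairs."""
    return {t for (_, t) in D if all((i, t) in D for i in X)}

def non_dominated(candidates, dominates):
    """Keep every candidate X for which no other candidate Y dominates X."""
    return [X for X in candidates
            if not any(Y != X and dominates(Y, X) for Y in candidates)]

def mine(items, D, theta, dominates):
    frequent = [frozenset(c) for k in range(len(items) + 1)
                for c in combinations(sorted(items), k)
                if len(cover(D, c)) >= theta]
    return non_dominated(frequent, dominates)

# toy dataset: transactions 1:{a,b}, 2:{a,b,c}, 3:{b,c}
D = {("a", 1), ("b", 1), ("a", 2), ("b", 2), ("c", 2), ("b", 3), ("c", 3)}
items = {"a", "b", "c"}
# maximal: Y dominates X iff Y is a strict superset
maximal = mine(items, D, 2, lambda Y, X: Y > X)
# closed: strict superset with the same cover
closed = mine(items, D, 2, lambda Y, X: Y > X and cover(D, Y) == cover(D, X))
# free: strict subset with the same cover
free = mine(items, D, 2, lambda Y, X: Y < X and cover(D, Y) == cover(D, X))
```

On this dataset, `maximal` is {a,b} and {b,c}, while `free` contains the empty set, {a} and {c}: swapping one dominance predicate for another changes the mining task.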
d) Relevant Subgroup Discovery: This example was already mentioned informally in the introduction. The term subgroup discovery or discriminative itemset mining [16] is often used when each transaction in the database has an associated label, for example positive or negative. The complete definition of relevant subgroup discovery (of which the one in the introduction is a special case) is the following:
$$\{ X \subseteq I \mid |cover_{D^+}(X)| \geq \theta \land \nexists Y \subseteq I : cover_{D^+}(Y) \supseteq cover_{D^+}(X) \land cover_{D^-}(Y) \subseteq cover_{D^-}(X) \land (cover_D(X) = cover_D(Y) \rightarrow Y \supset X) \} \quad (6)$$
The last condition states that if two patterns cover exactly the same set of transactions, the one with the largest set of items is preferred.
The dominance relation here is that $Y$ dominates $X$ iff $cover_{D^+}(Y) \supseteq cover_{D^+}(X) \land cover_{D^-}(Y) \subseteq cover_{D^-}(X) \land (cover_D(X) = cover_D(Y) \rightarrow Y \supset X)$.
e) Sky Patterns: A last example is the recently introduced sky patterns [14]. The problem of mining sky patterns is similar to that of finding the Pareto-optimal front for a multi-objective optimization problem. More formally, let $m_1(I)$ and $m_2(I)$ be two measures that can be calculated for any itemset $I$. For example, $m_1$ measures the size of the itemset and $m_2$ measures its frequency. The problem of finding all sky patterns given $m_1$ and $m_2$ can be formalized as:
$$\{X \subseteq I \mid |cover_D(X)| \geq 1 \land \nexists Y \subseteq I : |cover_D(Y)| \geq 1 \land ((m_1(Y) > m_1(X) \land m_2(Y) \geq m_2(X)) \lor (m_1(Y) \geq m_1(X) \land m_2(Y) > m_2(X)))\}$$
Here $|cover_D(X)| \geq 1$ is a local constraint; the dominance relation is that $Y$ dominates $X$ iff $(m_1(Y) > m_1(X) \land m_2(Y) \geq m_2(X)) \lor (m_1(Y) \geq m_1(X) \land m_2(Y) > m_2(X))$.
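This sky-pattern dominance test is exactly a Pareto check over the two measures. A small sketch with hypothetical patterns and measure values (none of the names or numbers come from the paper):

```python
def sky(patterns, m1, m2):
    """Keep the patterns not dominated in the Pareto sense:
    Y dominates X iff Y is >= X on both measures and > on at least one."""
    def dominates(Y, X):
        return (m1(Y) >= m1(X) and m2(Y) >= m2(X)
                and (m1(Y) > m1(X) or m2(Y) > m2(X)))
    return [X for X in patterns
            if not any(dominates(Y, X) for Y in patterns if Y is not X)]

# hypothetical (m1, m2) values, e.g. (size, frequency), attached to pattern names
measures = {"p1": (1, 5), "p2": (2, 3), "p3": (3, 3), "p4": (2, 2)}
result = sky(list(measures),
             lambda p: measures[p][0],
             lambda p: measures[p][1])
print(sorted(result))  # → ['p1', 'p3']
```

Here `p2` is dominated by `p3` (equal frequency, larger size) and `p4` by `p2`, so only the Pareto front survives.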
The above examples provide the intuition that a dominance relation captures a wide range of non-local constraints and pattern mining tasks. The problem of how to specify them in a simple general framework is addressed next.
III. An Algebra for Dominance Programming
The proposed algebra will allow us to specify both the local constraints and the dominance relations of the above problems in a concise way. The algebra consists of two parts: a constraint algebra, which will be used to express local constraints, and a dominance algebra, which will be used to express the dominance relations; the main novelty in our work is the use of an algebra to combine constraints with dominance relations, and the use of an algebra to specify dominance relations.
A. A Constraint Algebra for Local Constraints
Our approach combines ideas from database theory with ideas from constraint programming. Central is the idea that a local pattern mining problem can be seen as a constraint satisfaction problem (CSP) [7], where each pattern corresponds to a solution of the CSP. More formally, a CSP $P = (V, D, C)$ [17] is specified by:
- a finite set of variables $V$;
- an initial domain $D$, which maps every variable $v \in V$ to a finite set of values $D(v)$;
- a finite set of constraints $C$.
Solving a CSP corresponds to finding an assignment to the variables in $V$ from their domain $D(v)$ such that all constraints in $C$ are satisfied.
We can represent itemset mining problems with local constraints as constraint satisfaction problems, following the methodology of De Raedt et al. [7]. As an example consider the problem of frequent itemset mining. The problem of frequent itemset mining can be represented by means of two sets of variables:
- $I = \{i_1, \ldots, i_n\}$, which represent the items;
- $T = \{t_1, \ldots, t_m\}$, which represent the transactions.
Hence, $\mathcal{V} = I \cup T$. All variables $v \in \mathcal{V}$ have a binary domain: $D(v) = \{0, 1\}$. Finally, we impose the following constraints between these variables:
- coverage, stating that a transaction is covered (Equation 2) iff it contains all items in the itemset represented by the item variables. The constraint that should be satisfied between the variables in $I$ and $T$ is hence: $cover_D(I, T) \equiv \forall t_j \in T : (t_j = 1 \leftrightarrow \forall i_k \in I : (i_k = 1 \rightarrow (k, j) \in D));$
- minimum frequency, stating that the sum of the transaction variables exceeds a given threshold $\theta$:
support$(T, \theta) \equiv \sum_{t_j \in T} t_j \geq \theta$.
One way of looking at this, is that from all potential assignments to the variables in $\mathcal{V}$, the constraints select a subset. This is illustrated in Figure 1.
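As a concrete illustration, the coverage and support constraints can be checked by brute-force enumeration of all binary assignments to the item and transaction variables (a toy sketch of the CSP semantics, not the constraint-propagation approach of [7]; the matrix `Dmat` and all names are ours):

```python
from itertools import product

# toy data: Dmat[j][k] == 1 iff transaction j contains item k
# (n = 2 items, m = 2 transactions)
Dmat = [[1, 1],
        [1, 0]]
n, m, theta = 2, 2, 1

solutions = []
for bits in product([0, 1], repeat=n + m):
    items, trans = bits[:n], bits[n:]
    # coverage: t_j = 1  iff  every selected item occurs in transaction j
    coverage = all((t == 1) == all(Dmat[j][k] == 1
                                   for k in range(n) if items[k] == 1)
                   for j, t in enumerate(trans))
    # minimum frequency: at least theta transaction variables set to 1
    support = sum(trans) >= theta
    if coverage and support:
        solutions.append((items, trans))
```

Each surviving assignment pairs an itemset with exactly its cover, mirroring the "select rows of a big table" reading given above.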
We can observe a similarity between database querying and constraint satisfaction at this point. If we would have a table with all potential assignments to all variables (hence, we would materialize the table in Figure 1(b), where each column of the table corresponds to a variable in the CSP), we could find the solutions to the itemset mining problem by only selecting those rows (assignments) from the table that satisfy the constraints. Our algebra exploits this observation, allowing the reuse of concepts of relational algebra to formalize mining problems.
Expressions in our algebra for specifying constraint satisfaction problems are defined as follows.
Definition 1 (Constraint Algebra). Expressions in the constraint algebra are inductively defined as follows.
- (generator) let $a$ and $b$ be integers, then $\{a, \ldots, b\}$ is an expression in our algebra; it defines a table with a single column of length $|\{a, \ldots, b\}|$ where each row corresponds to a different value from $\{a, \ldots, b\}$
- (product) let $E_1$ and $E_2$ be expressions in our algebra, then $E_1 \times E_2$ is an expression in our algebra; let $T_1$ and $T_2$ be the tables represented by $E_1$ and $E_2$, then $E_1 \times E_2$ defines the table $\{(v_1, \ldots, v_n, u_1, \ldots, u_m) \mid (v_1, \ldots, v_n) \in T_1, (u_1, \ldots, u_m) \in T_2\}$
- (power) let $E$ be an expression in our algebra and let $n$ be an integer, then $E^n$ is an expression in our algebra; let $T$ be the table represented by $E$, then $E^n$ represents the table $T \times T \times \cdots \times T$, where the product is taken $n - 1$ times
- (renaming) let $E$ be an expression in our algebra and $V$ an identifier, then $\lambda_V(E)$ is an expression; let $T$ be the table represented by $E$, then $\lambda_V(E)$ represents the table $T$ in which all columns have been renamed with names
V[1]...V[n], where n is the number of columns in T. If n = 1, the name is assumed to be V; V will be referred to as a variable name.
- (selection) let $E$ be an expression in our algebra and let $c$ be a constraint, then \( \sigma_c(E) \) is an expression in our algebra as well; let $T$ be the table represented by expression $E$, then \( \sigma_c(E) \) represents the table with all rows in $T$ that satisfy constraint $c$.
Note that we made a deliberate choice to use a notation which is close to that of relational algebra. As an example, we can formalize the problem of frequent itemset mining with the following expression.
\[ E_{fi} \equiv \sigma_{\text{support}(T, \theta)} (\sigma_{\text{cover}(I, T)} (\lambda_I (\{0,1\}^n) \times \lambda_T (\{0,1\}^m))) \]
The expression reads as follows:
1. The generator \( \lambda_I (\{0,1\}^n) \times \lambda_T (\{0,1\}^m) \) conceptually describes all the tuples \((i_1, \ldots, i_n, t_1, \ldots, t_m)\) in \(\{0,1\}^{n+m}\). Each tuple represents an itemset and a set of transaction identifiers.
2. The inner select operator \( \sigma_{\text{cover}(I, T)}(\ldots) \) selects only the tuples in which a value \( t_j = 1 \) iff the transaction \( t_j \) contains the itemset \((i_1, \ldots, i_n)\). These are all the tuples that represent itemsets and their corresponding cover.
3. The outer select operator \( \sigma_{\text{support}(T, \theta)}(\ldots) \) selects only the tuples in which the number of \( t_j \)'s equal to 1 is at least \( \theta \). In other words, all the tuples that represent frequent itemsets and their cover.
The selection operator in our algebra can in principle use all constraints available in traditional constraint programming systems [17]. Indeed, one can see that expressions in the constraint algebra correspond closely to the basic elements of a CSP: the \( \lambda \) operator essentially introduces variables, while the \( \sigma \) operator introduces constraints between these variables. Consequently, an expression in the constraint algebra could rather straightforwardly be evaluated using generic constraint programming systems, as was shown in [18].
B. A Dominance Algebra for Programming Pre-orders
We will now discuss how the dominance relationship introduced in Section II can be formalized in an extension of the constraint algebra. The main idea is to express the domination relations as pairwise preferences between assignments to variables. The main theoretical tool is that of preorders.
**Definition 2.** Let \( P \) be a set and let \( R \) be a binary relation over elements in \( P \), i.e. \( R \subseteq P \times P \); then \( R \) is a preorder if:
- (transitivity) if \( xRy \) and \( yRz \), then \( xRz \);
- (reflexivity) for all \( x \in P \): \( xRx \).
Here \( xRy \) is a shorthand for \((x,y) \in R\).
In our case, the set \( P \) will be a set of solutions to a CSP, i.e. \( P \) is the set of all rows in a table \( T \) defined by an expression \( E \) in the constraint algebra.
For a given preorder, we can now define the dominance operator in our algebra:
**Definition 3 (Dominance Operator).** Let \( E \) be an expression in the constraint algebra and let \( T \) be the table represented by the expression \( E \). Furthermore, let \( R \) be a preorder over the elements in \( T \). Then \( \sigma_R(E) \) represents the following table:
\[ \sigma_R(T) = \{ \bar{x} \in T \mid \forall \bar{y} \in T : \bar{y}R\bar{x} \rightarrow \bar{x}R\bar{y} \} \]
i.e., the set of all rows that are not strictly dominated according to the preorder, that is, they are only dominated by equivalent solutions.
As an example, consider \( T = \{a,b,c,d\} \) and \( R = \{(a,a), (b,b), (c,c), (d,d), (a,b), (b,a), (c,d)\} \). According to the dominance relation \( R \), \( a \) and \( b \) are equivalent (because \((a,b)\) and \((b,a)\) are both in \( R \)) but incomparable to either \( c \) or \( d \). Moreover, \( d \) is dominated by (but not equivalent to) \( c \). We thus have \( \sigma_R(T) = \{a,b,c\} \).
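Definition 3 keeps a row unless it is strictly dominated, i.e., dominated by some row that it does not dominate back. A direct sketch of the operator on the example relation above (the function name `sigma` is ours):

```python
def sigma(T, R):
    """Dominance operator (Definition 3): x survives iff for every y
    with (y, x) in R we also have (x, y) in R."""
    return {x for x in T
            if all((x, y) in R for y in T if (y, x) in R)}

T = {"a", "b", "c", "d"}
R = {("a", "a"), ("b", "b"), ("c", "c"), ("d", "d"),
     ("a", "b"), ("b", "a"), ("c", "d")}
print(sorted(sigma(T, R)))  # → ['a', 'b', 'c']
```

`a` and `b` dominate each other (equivalent) and both survive; `d` is dominated by `c` without dominating it back, so only `d` is removed.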
The main remaining question is now how to specify the preorder \( R \). Clearly an extensional definition of \( R \) is not practical when the number of tuples is large; therefore we introduce an algebra for programming preorders.
**Definition 4 (Preorder Algebra).** Expressions \( E \) in the preorder algebra, for a table \( T \) with column names \( V \), are inductively obtained as follows:
- let \( v \) be a variable (column) in \( V \), then \((\geq_v), (\leq_v)\) and \((=_v)\) are expressions in the preorder algebra. They define preorders
\[ (\geq_v) \equiv \{(\bar{x}, \bar{y}) \mid \bar{x}, \bar{y} \in T, x_v \geq y_v\} \]
and
\[ (\leq_v) \equiv \{(\bar{x}, \bar{y}) \mid \bar{x}, \bar{y} \in T, x_v \leq y_v\}; \]
finally,
\[ (=_v) \equiv \{(\bar{x}, \bar{y}) \mid \bar{x}, \bar{y} \in T, x_v = y_v\}; \]
here \( x_v \) and \( y_v \) denote the values of variable \( v \) in tuple \( \bar{x} \) and \( \bar{y} \), respectively; note that the values each variable \( v \) can take are assumed to be ordered;
- let \( E_1 \) and \( E_2 \) be expressions in the preorder algebra, then \( E_1 \land E_2 \) is an expression in the preorder algebra as well; let \( R_1 \) and \( R_2 \) be the preorders identified by these expressions, then \( E_1 \land E_2 \) defines a preorder
\[ (\bar{x}, \bar{y}) \in R_1 \land R_2 \Leftrightarrow (\bar{x}, \bar{y}) \in R_1 \land (\bar{x}, \bar{y}) \in R_2. \]
Let us consider the example of maximal frequent itemset mining to illustrate this algebra. Remember from Section II that the problem can be formalized as follows:
\[ \{X \subseteq I \mid |\text{cover}_D(X)| \geq \theta \land \nexists Y \supset X : |\text{cover}_D(Y)| \geq \theta \} \]
with the following dominance relation: \( Y \text{ dominates } X \) iff \( Y \supseteq X \). In our tabular representation of potential solutions, this means that a row \( \bar{x} \) representing one pattern dominates another row \( \bar{y} \) representing another pattern iff for each column \( v \) representing an item, the inequality \( x_v \geq y_v \) holds. We can formalize this preorder with the expression:
\[ \land_{i \in I}(\geq_i), \]
where \( I \) is the set of columns corresponding to item variables.
To obtain the maximal frequent itemset mining problem, we can now formulate this dominance relation in our algebra, and apply it to the expression representing all frequent itemsets \((E_{fi})\):
\[
\sigma_{\bigwedge_{i \in I}(\geq_i)}(\sigma_{\text{support}(T,\theta)}(\sigma_{\text{cover}(I,T)}(\lambda_I(\{0,1\}^n) \times \lambda_T(\{0,1\}^m))))
\]
For example, in Figure 1 the final set of patterns determined by this expression would consist of a single row \((0, 1, 1, 1, 0)\) as it dominates the row \((0, 0, 1, 1, 1)\) (\(I\) variables indicated in bold).
We believe that this closely corresponds to the intuition most researchers have about this problem: among all frequent itemsets, we are interested in finding only those that are maximal.
### IV. Examples of Dominance Programming
We now show how the examples in Section II, as well as others, can be expressed in the algebra.
#### A. Closed Frequent Itemset Mining
The expression for this problem extends that of the frequent itemset mining problem. In addition, we need to express that if two itemsets cover the same set of transactions, we prefer the larger one. We can formalize this in the preorder algebra with the following expression:
\[
(\bigwedge_{t \in T}(=_t)) \land (\bigwedge_{i \in I}(\geq_i)).
\]
The closed frequent itemset mining problem is then expressed in total with:
\[
\sigma_{(\bigwedge_{i \in I}(\geq_i)) \land (\bigwedge_{t \in T}(=_t))}(E_{fi})
\]
#### B. Free Itemsets
From a dominance point of view, the only difference with closed itemsets is that now subsets dominate the supersets covering the same transactions:
\[
(\bigwedge_{t \in T}(=_t)) \land (\bigwedge_{i \in I}(\leq_i)).
\]
Free frequent itemset mining can hence be expressed as:
\[
\sigma_{(\bigwedge_{i \in I}(\leq_i)) \land (\bigwedge_{t \in T}(=_t))}(E_{fi})
\]
#### C. Cost-based Itemset Mining
We now first demonstrate how additional local constraints can be expressed in our algebra. In the next section, we will extend this formulation with dominance relations. A prototypical local constraint is a constraint on the cost of an itemset, assuming every item has an individual cost [4]. Given a cost vector \( C = (c_1, \ldots, c_n) \) that contains a cost \( c_i \) for every item \( i \), the cost of an itemset \( I \) can be computed as follows:
\[
\text{cost}(I) = \sum_{i \in I} c_i.
\]
Using our algebra, we extend the standard frequent itemset mining expression with a variable \( c \) and constrain it to the cost of the itemset:
\[
E_{fic} \equiv \sigma_{c=\text{cost}(I)}(E_{fi} \times \lambda_c(\{0, \ldots, n\}))
\]
#### D. Cost-based Itemset Mining and Dominance Relations
As studied by [10], combining closed itemset mining with cost-based itemset mining with a maximum cost can result in different solutions depending on the interpretation: one can either mine all closed itemsets and filter out the ones with a too high cost (as one would do in post-processing), or calculate the closure of all itemsets that have a cost lower than some threshold. While the former is typically implemented in existing systems (out of practical reasons), the latter is actually more meaningful.
Our algebra allows to express both variants. Let \( \sigma_{clo} \equiv \sigma_{(\wedge_{i \in T}(=)) \land (\wedge_{i \in I}(\geq))} \), then the closed itemsets that are not too costly are formalized as:
\[
\sigma_{c\leq\theta}(\sigma_{clo}(E_{fic}));
\]
and the closure over the itemsets that are not too costly:
\[
\sigma_{clo}(\sigma_{c<\theta}(E_{fic})).
\]
This demonstrates that the algebra is rich enough to cover settings that have been problematic in the standard constraint-based mining framework up to now and also does this in an intuitive way.
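The contrast between the two variants can be checked with a brute-force sketch over a toy database (illustrative names and costs; naive enumeration stands in for the CP evaluation described later). With per-item costs \(a{=}1\), \(b{=}3\), \(c{=}1\) and cost threshold 2, filtering the closed itemsets by cost returns nothing, while taking the dominance-maximal elements among the cheap itemsets still returns useful representatives:

```python
from itertools import chain, combinations

DB = [{"a", "b", "c"}, {"a", "b"}, {"b", "c"}, {"a", "b", "c"}]
ITEMS = sorted(set().union(*DB))
COST = {"a": 1, "b": 3, "c": 1}  # illustrative per-item costs

def cover(I):
    return frozenset(t for t, trans in enumerate(DB) if I <= trans)

def frequent(theta=2):
    subs = chain.from_iterable(
        combinations(ITEMS, k) for k in range(len(ITEMS) + 1))
    return [frozenset(s) for s in subs if len(cover(frozenset(s))) >= theta]

def clo(S):
    """Dominance-maximal itemsets within S (same cover, larger preferred)."""
    return [I for I in S if not any(cover(J) == cover(I) and I < J for J in S)]

def cheap(S, theta_c=2):
    return [I for I in S if sum(COST[i] for i in I) <= theta_c]

F = frequent()
v1 = cheap(clo(F))  # filter the closed itemsets by cost (post-processing view)
v2 = clo(cheap(F))  # dominance-maximal elements among the cheap itemsets
print(sorted(map(sorted, v1)), sorted(map(sorted, v2)))
```

Here `v1` is empty because every closed itemset contains the expensive item `b`, while `v2` keeps the maximal cheap itemset of each cover class, mirroring the distinction discussed above.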
#### E. Sky Patterns
We illustrate the sky pattern setting on an example by Soulet et al. [14]. They address the problem of extracting sky patterns with respect to the frequency and area measures. The frequency of an itemset is the size of its cover:
\[
\text{freq}(I) = |\text{cover}(I)|,
\]
and the area of an itemset corresponds to the dimension of the itemset in terms of items and transactions:
\[
\text{area}(I, T) = |\{(i,t)|i \in I, t \in T\}| = |I| \times |T|.
\]
We can reuse the expression for frequent itemset mining here, with the assumption that \( \theta = 1 \).
Next, as was the case for cost-based itemset mining, we add two integers \( f \) and \( a \) representing the frequency and the area respectively:
\[
E_{i2} \equiv \sigma_{f=\text{freq}(I)}(E_{fi} \times \lambda_f(\{0, \ldots, m\}))
\]
\[
E_{i3} \equiv \sigma_{a=\text{area}(I,T)}(E_{i2} \times \lambda_a(\{0, \ldots, n \times m\}))
\]
Then mining the sky itemsets with respect to the set of measures \( \{\text{frequency, area}\} \) can be formalized as follows:
\[
\sigma_{(\geq_a)\land (\geq_f)}(E_{i3})
\]
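Operationally, this expression is a Pareto filter over the pairs (freq, area). The sketch below (toy database, illustrative helper names, naive enumeration) keeps each itemset that no other itemset strictly dominates on both measures; itemsets with identical measure pairs are kept together, as the preorder semantics prescribes:

```python
from itertools import chain, combinations

DB = [{"a", "b", "c"}, {"a", "b"}, {"b", "c"}, {"a", "b", "c"}]
ITEMS = sorted(set().union(*DB))

def cover(I):
    return frozenset(t for t, trans in enumerate(DB) if I <= trans)

def freq(I):
    return len(cover(I))

def area(I):
    return len(I) * freq(I)

# theta = 1: any itemset with a non-empty cover participates.
candidates = [frozenset(s)
              for s in chain.from_iterable(
                  combinations(ITEMS, k) for k in range(len(ITEMS) + 1))
              if freq(frozenset(s)) >= 1]

def dominates(J, I):
    """J strictly dominates I: at least as good on both measures, not equal."""
    return (freq(J) >= freq(I) and area(J) >= area(I)
            and (freq(J), area(J)) != (freq(I), area(I)))

sky = [I for I in candidates
       if not any(dominates(J, I) for J in candidates)]
print(sorted((sorted(I), freq(I), area(I)) for I in sky))
```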
#### F. Relevant Subgroup Discovery
The setting of relevant subgroup discovery introduced earlier can be expressed using the following expression:
\[
\sigma_{(\wedge_{i \in T^+}(=)) \land (\wedge_{i \in I}(\geq))}\left(\sigma_{(\wedge_{i \in T^-}(=)) \land (\wedge_{i \in I}(\leq))}\left(\sigma_{\text{cover}(I,T^+)}\,\sigma_{\text{cover}(I,T^-)}\left(\lambda_{T^+}(\{0,1\}^m) \times \lambda_{T^-}(\{0,1\}^m) \times \lambda_I(\{0,1\}^n)\right)\right)\right)
\]
In this expression, the inner-most dominance operator preserves patterns with the same transaction set because they are considered equivalent. The second operator ensures that among such patterns the largest ones are preferred, as required.
#### G. Dominated Patterns in PN Space
Relevant subgroups dominate each other based on the positive and negative transactions covered. Instead, one could also impose that a pattern dominates another pattern if it simply covers more (or equal) positive transactions and less (or equal) negative transactions. The problem is similar to that of finding all patterns on the convex hull in PN space [15] and is related to that of finding sky patterns.
In our algebraic notation, this can be expressed by introducing integers $p$ and $q$ that represent the number of positive/negative transactions covered, and imposing the above mentioned dominance relation:
$$
\sigma_{(\geq_{p}) \land (\leq_{q})}\left(\sigma_{p=\mathit{freq}^+(I)}\,\sigma_{q=\mathit{freq}^-(I)}\left(\lambda_{I}(\{0,1\}^{n}) \times \lambda_{p}(\{0,\ldots,m^{+}\}) \times \lambda_{q}(\{0,\ldots,m^{-}\})\right)\right)
$$
#### H. Novel Settings
The algebra is not restricted to formulating existing problems. It provides a general yet well-founded way to express any combination of local constraints and dominance relations.
For example, one can formulate the problem of finding the smallest maximal itemsets as follows:
$$
\sigma_{(\leq_{s})}\left(\sigma_{\wedge_{i \in I}(\geq)}\left(\sigma_{s=|I|}(E_{fi} \times \lambda_{s}(\{0,\ldots,n\}))\right)\right).
$$
As far as we know, there exists no algorithm capable of addressing this simple problem.
While the above has shown how a wide range of mining tasks can be formulated using the algebra, more than is possible in existing constraint-based mining frameworks, we will now explain a general way to solve problems expressed in this algebra.
### V. Evaluating Expressions in the Dominance Algebra
To evaluate expressions formalized in the dominance algebra, we first show that every expression can be reduced to a normal form. Building on earlier work [18], we then propose to evaluate expressions using constraint programming techniques. We will give a brief introduction to constraint programming systems; then we will discuss the modifications needed to handle the dominance algebra.
#### A. Normal Form
We can observe the following property on the product $\times$ in our algebra.
**Lemma 1.** Given two dominance algebraic expressions $E_{1}$ and $E_{2}$, $\sigma(E_{1}) \times E_{2}$ is equivalent to $\sigma(E_{1} \times E_{2})$, i.e., both expressions define the same solution set, where $\sigma$ is either a dominance operator or a selection operator that only uses variables present in $E_{1}$.
**Proof:** For the selection operator this property carries over straightforwardly from the analogous property of the relational algebra in database theory. For the dominance operator we prove the two directions. If $(\vec{x}, \vec{y}) \in \sigma_{R}(E_{1}) \times E_{2}$, then $\vec{x} \in \sigma_{R}(E_{1})$; all solutions with the same $\vec{x}$ but different $\vec{y}$ must be equivalent under $R$, as $R$ does not depend on the variables in $\vec{y}$. The operator $\sigma_{R}$ returns all equivalent solutions, and hence $(\vec{x}, \vec{y}) \in \sigma_{R}(E_{1} \times E_{2})$. The other direction follows by a similar argument.
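For the selection-operator case, the lemma can be checked mechanically on a toy example, representing each expression as a list of tuples and the product as tuple concatenation (names are illustrative):

```python
from itertools import product

E1 = [(1,), (2,), (3,)]  # an expression introducing one variable
E2 = [(10,), (20,)]      # an expression introducing another variable

def select(rows, pred):
    return [r for r in rows if pred(r)]

def times(a, b):
    # Cartesian product with tuple concatenation, as in the algebra.
    return [x + y for x, y in product(a, b)]

# The predicate only uses the variable introduced by E1.
pred = lambda r: r[0] >= 2
lhs = times(select(E1, pred), E2)   # sigma(E1) x E2
rhs = select(times(E1, E2), pred)   # sigma(E1 x E2)
assert lhs == rhs                   # both yield the same solution set
```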
As a consequence of this lemma, we can always rewrite an expression in the dominance algebra in a normal form of the following kind:
$$
\sigma_{1}(\sigma_{2}(\cdots \sigma_{n}(\lambda_{v_{1}}(\{1, \ldots, c_{1}\}) \times \cdots \times \lambda_{v_{m}}(\{1, \ldots, c_{m}\})) \cdots))
$$
where $\sigma_{1}, \ldots, \sigma_{n-1}$ are either dominance or selection operators, and $\sigma_{n}$ is a selection operator. Using Lemma 1 one can see that any product of two expressions can always be rewritten by pushing one of the operands of the product deeper in the expression, unless both expressions introduce variables.
We will use constraint programming systems to process expressions in this normal form.
#### B. Constraint Programming
Constraint programming (CP) systems are general systems for solving constraint satisfaction problems (CSP). Thus these systems can be used to evaluate the constraint algebra defined in Section III-A. An expression of the form $\sigma_{\varphi}(\lambda_{v_{1}}(\{1, \ldots, c_{1}\}) \times \cdots \times \lambda_{v_{m}}(\{1, \ldots, c_{m}\}))$, where $\sigma_{\varphi}$ is a selection operator, can easily be evaluated by a constraint programming system by entering all defined variables and all constraints in $\varphi$ as constraints in the CP system. This is possible for all the constraints studied in this paper.
Algorithm 1 gives a high-level overview of a CP system. Essentially, a CP system is a depth-first search algorithm. The system maintains a domain of potential values for each variable in the CSP; $D(x)$ denotes the possible values that a variable $x$ can still take. Then the CP system searches for assignments of the variables that satisfy all the constraints simultaneously by shrinking the domains of the variables. To shrink the domain $D(x)$ of a variable $x$, the CP system uses two mechanisms: constraint propagation and search.
Constraint propagation takes a (partial) solution and evaluates all the constraints, which are stored in a global constraint store $\varphi$ (line 1). Propagation can have several effects: it can detect failure (i.e., the current partial solution can never be extended to a full solution), it can remove a constraint from consideration, or it can shrink the domain of variables. For example, consider a constraint over three boolean variables stating that \( \alpha \lor \beta \lor \gamma = 1 \). If \( \alpha \) has been assigned the value 1, propagation detects that the constraint is satisfied and can be removed; if \( \alpha \) and \( \beta \) have both been assigned 0 while \( \gamma \) is still unassigned, propagation detects that \( \gamma \) must be 1 and shrinks its domain accordingly. For each constraint, a CP system includes built-in algorithms to perform these reasoning steps.
---
**Algorithm 1 Constraint-Search(Domain: $D$)**
1: $D := \text{Propagate}(D, \varphi)$
2: if constraints were violated then
3: return
4: end if
5: if $\exists x \in V : |D(x)| > 1$ then
6: $x := \text{Select-Variable}(V)$
7: for all $d \in D(x)$ do
8: Constraint-Search($D[x \mapsto \{d\}]$)
9: end for
10: else
11: Output solution
12: end if
When no more propagation is possible, the constraint solver invokes the search procedure. The search procedure selects an unassigned variable (line 6) according to a user-defined heuristic (for example, the variable with the smallest domain) and assigns it a value (line 7). The order in which variables are selected is known as the variable ordering; the order in which values are selected is known as the value ordering. The value and variable orderings do not affect the correctness of the algorithm, but may affect its efficiency.
After the assignment, the solving procedure is called recursively (line 8).
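The skeleton of Algorithm 1 can be sketched in plain Python. This is a deliberately simplified stand-in: propagation here only detects constraints violated by fully fixed variables, whereas a real CP system such as Gecode would also shrink domains.

```python
def search(domains, constraints, out):
    """Depth-first CP search in the style of Algorithm 1 (simplified sketch)."""
    # Propagate: fail as soon as a constraint is violated by the variables
    # that are already fixed to a single value (no domain shrinking here).
    fixed = {v: next(iter(d)) for v, d in domains.items() if len(d) == 1}
    for scope, pred in constraints:
        if all(v in fixed for v in scope) and not pred(*(fixed[v] for v in scope)):
            return  # lines 2-3: a violated constraint kills the branch
    undecided = [v for v in domains if len(domains[v]) > 1]
    if not undecided:
        out.append(dict(fixed))  # line 11: every variable is assigned
        return
    # Line 6: smallest-domain-first variable ordering (user-defined heuristic).
    x = min(undecided, key=lambda v: len(domains[v]))
    for d in sorted(domains[x]):  # lines 7-8: try each value in turn
        child = dict(domains)
        child[x] = {d}
        search(child, constraints, out)

# Example: the boolean constraint alpha or beta or gamma = 1.
sols = []
search({v: {0, 1} for v in ("alpha", "beta", "gamma")},
       [(("alpha", "beta", "gamma"), lambda a, b, c: bool(a or b or c))], sols)
print(len(sols))  # 7 of the 8 assignments satisfy the disjunction
```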
#### C. Evaluating Dominance Expressions with CP
Using a constraint programming system such as the one shown in Algorithm 1, we can employ the following straightforward strategy for evaluating any expression in the normal form:
1. run a CP system to evaluate \( \sigma_n(\lambda_{v_1}(\{1, \ldots, c_1\}) \times \cdots \times \lambda_{v_m}(\{1, \ldots, c_m\})) \) and store the result set;
2. post-process the resulting set of solutions iteratively by applying implementations of \( \sigma_{n-1}, \ldots, \sigma_1 \) consecutively.
The implementations take a set as input and remove all dominated solutions.
While correct, this approach is not very efficient: the first step generates the complete set of patterns satisfying the constraints, even if the dominance relation is likely to eliminate most of them during the post-processing step. Maintaining and post-processing a large number of intermediate results is computationally expensive. In order to reduce the size of the intermediate solution set, we combine two strategies: (1) update the constraint satisfaction problem to eliminate unwanted solutions from the unexplored search space and (2) influence the value ordering in order to maximize the impact of strategy (1).
The main idea is to use the inner-most dominance operator \( \sigma_{n-1} = \sigma_R \) to guide and constrain the search of the CP system. To this end, we modify Algorithm 1 such that, instead of just outputting a solution for post-processing (line 11), we also update the set of constraints such that the CP system will avoid producing any solutions dominated by the current solution. In general, we can represent a dominance relation \( R \) as
\[
\bigwedge_{v \in V} (\geq_v) \land \bigwedge_{w \in W} (\leq_w).
\]
For each solution \( D \) outputted by Algorithm 1, it therefore suffices to add the constraint
\[
\bigvee_{v \in V} (v > D(v)) \lor \bigvee_{w \in W} (w < D(w)) \lor \bigwedge_{v \in V \cup W} (v = D(v)),
\]
which states that the future solution is either not dominated by \( D \), or that it is equivalent with it (with respect to the variables used in the dominance relation).
By employing this strategy, the CP system will avoid generating solutions that do not satisfy the dominance relation. However, this strategy can only discard solutions that have not been generated yet. In order to maximize the effectiveness of this strategy, we should therefore also take care of selecting a search order in which solutions are produced in the most beneficial order. In the case of a single dominance operator, we can always derive an optimal search order for which we can guarantee that any solution produced by the modified Algorithm 1 satisfies the dominance relation, thus eliminating the need for post-processing. For each variable \( v \) in the dominance relation \( R \), we simply assign values from smallest to largest (in case of \( \leq_v \)) or largest to smallest (in case of \( \geq_v \)). (Note that the order in which we select the variables is irrelevant; only the order in which we select the values for each variable matters.)
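A plain-Python stand-in for the modified search illustrates both ingredients on maximal frequent itemset mining (the toy database and helper names are illustrative; there is no actual constraint solver here). Branching sets each item variable to 1 before 0, the optimal value order for the \((\wedge_{i \in I}(\geq))\) dominance relation, and each candidate solution is checked against the previously output ones, which mimics the added blocking constraint:

```python
DB = [{"a", "b", "c"}, {"a", "b"}, {"b", "c"}, {"a", "b", "c"}]
ITEMS = sorted(set().union(*DB))
THETA = 3  # minimum frequency threshold

def freq(I):
    return sum(1 for t in DB if I <= t)

maximal = []  # solutions output so far

def dfs(I, rest):
    if not rest:
        # "Output solution" step with the blocking constraint applied:
        # reject I if an earlier solution strictly dominates it.
        if freq(I) >= THETA and not any(I < J for J in maximal):
            maximal.append(frozenset(I))
        return
    x, rest = rest[0], rest[1:]
    dfs(I | {x}, rest)  # value 1 first: largest-to-smallest for (>=)
    dfs(I, rest)        # then value 0

dfs(frozenset(), ITEMS)
print(sorted(map(sorted, maximal)))  # [['a', 'b'], ['b', 'c']]
```

Because every superset of an itemset is explored before the itemset itself, each solution that passes the blocking check is already maximal, exactly as claimed for the single-operator case.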
In the general case of multiple stacked dominance operators, it is often impossible to find such an optimal order and we need to use post-processing. In this case, it suffices to enumerate all solutions in reverse order and apply the same constraint update procedure as before (but replacing the CP search algorithm by an enumeration of the intermediate solutions).
It is important to note that the search order can also significantly impact the efficiency of the constraint programming system: the optimal search order for the dominance relation may not be the same as the (often unknown) optimal order for solving the underlying constraint satisfaction problem. In some cases, it may therefore be beneficial to diverge from the optimal search order for the dominance relation and use one that is better suited for the underlying CSP. Examining this trade-off is an important part of future work.
### VI. Experiments
In this section we evaluate our generic dominance programming approach on several tasks. We have implemented dominance programming by extending the state-of-the-art Gecode constraint solving system [19], version 4.0.0. The implementation is available on the authors’ website. The formulations for the local constraints were taken from CP4IM [18]. In the following experiments, we compare with ACMiner (v1.0) [13] and Eclat as implemented by Borgelt (v4.0) [20]; LCM (v5.3) [21] and CP4IM (v3.7.3) [18]; and Aetheris (v0.0.2) [14]. All datasets come from the UCI machine learning repository and were obtained online\(^1\). A description of the datasets can be found in [18]. Unless mentioned otherwise, the plotted datasets show representative results. The experiments were run on computers running Ubuntu 12.04 with quad-core Intel i7 processors and 16 GB of RAM.
#### A. Closed and Free Itemset Mining
As a baseline comparison, we compare our system with state-of-the-art systems on the task of closed and free frequent itemset mining. Note that we do not have the ambition to be faster on these specific well-studied problems. Indeed, it is not expected that a constraint programming approach is faster on these tasks, as can be seen by comparing the runtime of the CP-based CP4IM system with Eclat and LCM in Figure 2 (top). Furthermore, closed frequent itemsets can be enumerated with polynomial delay, while our system adds constraints for every solution found. However, such optimizations do not carry over to other settings, while our system can handle a large number of settings in a generic way, as we will see.
\(^1\)http://dtai.cs.kuleuven.be/CP4IM/datasets/
Figure 2 (bottom) shows similar results for free frequent itemset mining. Note that the general CP4IM system cannot handle this mining task in a simple way, while ours can.
#### B. Combining Closed and Cost Constraints
As explained in [10] and mentioned earlier, combining the closedness constraint with a minimum frequency and maximum cost constraint can be done in two different ways. The first formulation (Equation 12) represents the naive but algorithmically simpler interpretation, while the second formulation (Equation 11) represents the more meaningful interpretation of taking the closure of all itemsets with a cost lower than some threshold.
Both can be formulated in our framework, and we report only on the second formulation. We used a unit cost for each item and cost thresholds at 10 and 5 percent of the maximum size, respectively. Figure 3 (top) shows runtimes for the two settings, while the bottom figures show the number of patterns. The runtime of LCM represents the minimum amount of time needed in case one would post-process all closed patterns. We can observe that a higher threshold leads to fewer patterns and lower runtimes, indicating the effectiveness of the search procedure. This is most obvious for low minimum support and cost thresholds, where the search is much more efficient than a post-processing approach would be.
#### C. Relevant Subgroup Discovery
We next compare our system on the task of relevant subgroup discovery to the approach of Garriga et al. [9]. They propose a two-step post-processing approach:
1) extract the set of all frequent closed patterns \( \mathcal{P}^+ \), on the dataset with positive transactions;
2) post-process this set by removing every pattern \( X \in \mathcal{P}^+ \) that has a subset \( Y \subseteq X \) with the same cover in the negative transactions.
Since no implementation is available, and as advised by the original authors, we used LCM for step 1 and implemented a post-processor in C++ for step 2.
Figure 4 shows the runtime (top) and number of solutions (bottom). For most of the datasets of the UCI repository, the number of closed-on-the-positive patterns is close to the number of relevant subgroups (such as in the german-credit dataset). In such cases, almost no additional pruning can be done and the post-processing approach is the most efficient. However, when the number of closed patterns diverges from the number of subgroups (such as in the hepatitis dataset), the post-processing approach has to handle an overwhelming number of closed patterns and thus performs poorly. In contrast, our approach performs efficiently because it can prune false-positive candidate patterns (i.e. candidate patterns that have subsets in the negative transactions).
#### D. Sky Patterns
Finally, we compare our system to the specialized sky pattern mining system Aetheris [14]. The task is to find all sky patterns according to the frequency and area measures, as explained in Section IV. Following the experimental protocol in [14], we also add a minimum support threshold to better study the behavior of the systems.
It is worth mentioning that for this problem the branching strategy was to select the smallest itemsets first (based on the observations made by [18]). This branching strategy is not optimal in the sense that it requires post-processing but is more efficient in practice.
Figure 5 (top) shows that for decreasing minimum support thresholds, our system is increasingly more successful in efficiently pruning the search space compared to Aetheris. The bottom figures show that a post-processing approach would not
be feasible for low thresholds as the difference between the number of closed patterns and sky patterns increases rapidly (note the exponential scale). On the contrary, our system is not impacted by the number of intermediate frequent closed patterns and is thus able to mine datasets without the frequency constraint (i.e. with $\theta = 1$).
### VII. Related Work
The approach presented in this paper has several key features, which have been studied individually in the past:
Constraint programming: As discussed earlier in detail, our approach extends the work of De Raedt et al. [7], which showed that constraint programming is an effective and generic paradigm to address and solve constraint based itemset mining problems.
Constraint programming has also been proposed as a solution for the problem of $k$-pattern set mining [22]. However, in this work a fixed size $k$ of the output was assumed. The algebraic approach presented in this paper does not assume a fixed pattern set size and is more scalable.
Within the constraint programming literature, our approach is related to CP-Nets [23]. CP-Nets provide a generic approach for specifying preference relations between solutions; they can be seen as an alternative formalism for specifying dominance relations. However, our algebra is more practical for the specification of orders in pattern mining.
Generic pattern mining and condensed representations: A well-known class of generic methods comprises those that search for borders in version spaces under monotonic and anti-monotonic constraints [24], [25]. Our work is different in several ways. First, the constraints in our framework are not necessarily monotonic or anti-monotonic. Second, we rely on generic constraint solving technology to support this wide range of constraints. Moreover, [26] proposes an algorithm that can address a broader range of constraints, but does not provide a language to describe them. Our dominance algebra represents a very different approach to problem formalization.
Most other generic approaches focus only on local constraints; they do not take into account relationships between patterns and do not build on constraint satisfaction technology [4], [6].
Multi-objective optimization: The framework that we propose in this paper is closely related to multi-objective optimization problems (MOOP) [27], and the identification of Pareto optimal sets [28]. Our dominance algebra puts a much stronger focus on expressing dominance relations and clarifies the relationships between MOOPs and itemset mining problems. Furthermore, our dominance algebra is explicitly developed to support preorders over large numbers of variables. In practice, the number of variables over which a dominance is defined in MOOPs is typically smaller.
Skyline patterns and queries: The work by Soulet et al. on skyline patterns [14] can be seen as a direct application of the MOOP framework to pattern mining. It assumes that an order is defined over a small number of scoring functions and does not support the orders that are needed to deal with problems such as relevant pattern mining and free itemset mining. The setup of Soulet et al. however fits nicely in the dominance programming framework.
The methodology presented in this paper has clear relationships to methodologies developed in the database community. Dominance reporting is a problem also relevant to the database community [29]. Skyline queries have been developed to deal with dominance relationships in databases [30]. Our work is different from traditional skyline queries as it deals with pattern mining problems where a combinatorial search is needed. Our algebraic notation closely resembles that of Codd’s relational algebra [31], but applies this notation in a context where combinatorial search is needed.
### VIII. Conclusions
In this paper, we have observed that dominance relations can be found in many pattern mining settings. Building on this observation, we have proposed an algebra that combines constraints and dominance relations and that can be used to adequately describe a broad range of pattern mining settings. This algebra resembles relational algebras and arguably would be easy to integrate in a database system.
To evaluate expressions in our algebra, we have proposed a methodology based on constraint programming technology. Despite the gain in generality provided by dominance programming, our system can compete with specialized mining algorithms and even outperform them in some cases. We believe that this is a strong indication that dominance programming uses the right concepts to describe pattern mining tasks.
Because of its exploratory nature, this work leaves a number of open questions:
Query rewriting: Given the close connection between data mining and databases it is natural to wonder whether common query optimization techniques in databases can also be applied to dominance programming.
Advanced data structures for evaluating dominance: Specialized algorithms for dominance reporting [29] could be used transparently by the CP system to improve efficiency. Similarly, optimized data structures for checking the dominance within a set of item sets may be used as well.
Intelligent variable and value ordering: At the moment, our system selects a value ordering that eliminates the need for post-processing. This order may not always be the most efficient to solve the core CSP. Studying the impact of the value ordering on the time required to evaluate dominance programs would also help to improve the efficiency.
Furthermore, this paper has a strong focus on itemset mining. An interesting question is how our algebra can be used to formulate more structured mining tasks such as sequence or graph mining. Finally, there is no reason to believe that dominance programming could not be used for other types of problems, such as resource allocation problems.
### Acknowledgments
We would like to thank the authors of ACMiner and Aetheris for sending us their code, and the authors of Eclat, LCM, and CP4IM for making their code available online. This work was supported by the European Commission under the project “Inductive Constraint Programming” contract number FP7-284715, by the Research Foundation–Flanders by means of two Postdoc grants and by the project “Principles of Patternset Mining”.
### References
GIAC Reverse Engineering Malware
GREM Practical Assignment
Version 1.0
Malware: A Look at Reverse Engineering
MSRLL.EXE
Lorna J. Hutcheson
Orlando SANS 2004
# TABLE OF CONTENTS
Abstract ........................................................................................................... 4
Part 1: Laboratory Setup .............................................................................. 4
Introduction .................................................................................................. 4
Hardware Setup ......................................................................................... 4
Networking Setup ...................................................................................... 4
Software Resources .................................................................................. 5
VMWare Workstation 4.5.1 ................................................................. 5
Linux Redhat ......................................................................................... 5
Windows 98 ........................................................................................... 5
Windows 2000 ....................................................................................... 5
PEInfo .................................................................................................... 5
Ollydbg .................................................................................................. 5
Ethereal ................................................................................................. 6
SNORT ................................................................................................. 6
Filemon .................................................................................................. 6
RegMon ................................................................................................. 6
TDIMon ................................................................................................. 6
LordPE .................................................................................................. 6
Notepad .................................................................................................. 7
Regshot ................................................................................................. 7
Netcat ................................................................................................. 7
Process Explorer .................................................................................. 7
PSkill .................................................................................................... 7
MD5sum ............................................................................................... 7
Part 2: Properties of the Malware Specimen ............................................ 8
Type of File ............................................................................................... 8
Size of the File ......................................................................................... 8
MD5 Hash of the File ............................................................................... 8
Operating System it Runs on ..................................................................... 8
Strings Embedded into it ......................................................................... 8
Part 3: Behavioral Analysis ....................................................................... 14
Behavior Before Code Analysis ............................................................. 14
Monitoring of File System Access ..................................................... 14
Monitoring registry/configuration Access ............................................ 14
Monitoring/Redirecting Network Connections .................................... 15
Monitoring Processes on the System ............................................... 15
Behavior After Code Analysis ............................................................... 16
The “Bot Army” ................................................................................. 18
Part 4: Code Analysis .................................................................................. 19
Unpacking/Unencrypting ....................................................................... 19
Program Code Disassembly ................................................................... 20
Debugging ............................................................................................... 20
Part 5: Analysis Wrap-Up ............................................................................ 23
Citation of Sources ..................................................................................... 24
Sites for Tools ............................................................................................ 24
Abstract
This practical will cover the reverse engineering of a malicious piece of code given to us to analyze. It will use the procedures taught in class and will follow the outline of the Table of Contents listed above.
Part 1: Laboratory Setup
Introduction
The laboratory environment was set up to safely analyze an unknown and potentially malicious piece of code called msrll.exe. In order to do so, the environment needed to be capable of exploring the full capabilities of the code, without allowing it to be released into the wild and potentially cause damage to systems. To facilitate this, the following laboratory configurations were designed to allow flexibility and ensure confinement of the unknown piece of software.
Hardware Setup
Only one computer was used as the test machine. This box is running a fully patched Windows XP Home Edition and has a 2.70 GHz processor with one gigabyte of memory. The unknown code was transferred to the box via a USB thumb drive; this was done to bypass the antivirus software running on the base system and allowed the code to be copied directly to the VM image. CD-ROM and floppy drives were also available.
Networking Setup
The base system was configured with Internet access and the Internet Connection Firewall turned on. A second Linksys firewall/router controls access out of the live network. The box is running Norton Antivirus with the latest definitions. During testing, all ports to the Internet were denied by blocking the box at the Linksys firewall/router. Once the ports in use were determined, I created a rule in the Linksys firewall/router to allow access on port 80 and port 443. This was to ensure that the malicious code could not get to the Internet in case of a mishap during testing.
In order to create the closed network test environment, VMWare Workstation software was used to create the test network. Three images were used for testing: an unpatched Windows 2000, an unpatched Windows 98, and the Linux Redhat image given to us during the REM track. These images were configured in host-only mode on bootup to disallow access outside of the VM environment and ensure containment of the malware. The boxes were configured using DHCP, and network connectivity was checked by ensuring that each system could be pinged from the Windows 2000 image.
Software Resources
VMWare Workstation 4.5.1
The key part of the testing was the ability to use VMWare. The software provides the ability to run a completely isolated network with multiple operating systems all on one box. It reduces cost of testing by reducing the need for multiple test boxes and networking hardware. This can be found at http://www.vmware.com/.
Linux Redhat
The image was given during the Reverse Engineering Malware course at a SANS conference. It was used as one of the test operating systems and provided a box for the malicious code to attempt to connect to via an IRC channel since it already had an IRC server loaded on it. Linux can be obtained free from http://www.linux.org/.
Windows 98
The image was an unpatched Windows 98 operating system used as a secondary box on which to launch the malicious code. This was done for two reasons: one was to determine the effects of how the code interacted with multiple systems infected with the same thing in an enclosed environment and second to observe any differences on different operating platforms.
Windows 2000
The image was an unpatched Windows 2000 operating system and was used as the primary system to launch the code on and conduct the analysis on. On this image was also loaded all the tools necessary to conduct our analysis.
PEInfo
PEInfo is a tool developed by Tom Liston that breaks down an executable so you can examine the PE header information and the structure of the Windows executable. However, when dealing with a packed file, the file first has to be unpacked before the tool can be of much use. The primary use of this tool was to analyze the strings of the file once it was unpacked. This is done by dragging the file over the PEInfo window and dropping it. I obtained this tool from Tom Liston.
Ollydbg
This is a great tool and it’s free. Ollydbg is a disassembler with great functionality and it is relatively easy to use. Also downloaded was OllyDump, a plugin that will allow you to dump code from memory. This tool was used to obtain a clean, unpacked version of the malicious code. Many scripts are available for Ollydbg that will find the Original Entry Point (OEP) of the code in assembly for you. However, since one of these might not always be available, and out of a desire to learn how to find the OEP manually, I enlisted the help of Tom Liston and his great programming skills. His technique for finding the OEP manually through Ollydbg will be described later in detail. This can be found at http://home.t-online.de/home/Ollydbg/.
**Ethereal**
Many sniffers exist, but Ethereal is my favorite. It is easy to use and lets you look at the information in a concise fashion or take an in-depth look at the packets. This was used to sniff the network traffic on the Windows boxes. A requirement to run any sniffer on Windows is WinPcap, which allows the interface to go into promiscuous mode. The program as well as the WinPcap drivers can be found at http://www.ethereal.com/.
**SNORT**
SNORT was also used as a sniffer. Since the Linux image already had snort installed, I used it to monitor traffic coming to the Linux box. It can be found at http://www.snort.org/.
**Filemon**
This is a great tool provided free by Sysinternals that allows you to monitor changes to, as well as access to, the file system. This tool was used when msrll.exe was launched to monitor what activity was taking place on the file system. This can be found at http://www.sysinternals.com/ntw2k/source/filemon.shtml.
**RegMon**
This tool is also provided free by Sysinternals and allows you to monitor registry accesses and changes. This tool was used when msrll.exe was launched to monitor what activity was taking place in the registry. This can be found at http://www.sysinternals.com/ntw2k/source/regmon.shtml.
**TDIMon**
TDIMon is a tool that allows you to monitor the TCP and UDP activity on the system. TDI stands for Transport Driver Interface which is exactly what it is monitoring. This was used to help determine what activity msrll.exe might be doing. This can be found at http://www.sysinternals.com/ntw2k/freeware/tdimon.shtml.
**LordPE**
LordPE is a tool that was used during the analysis without useful results. The tool is intended to give a look at the processes that are running and how they interact. It also allows you to dump a process from memory. This was used after launching msrll.exe to attempt to get an unpacked version of the code. However, the code that was obtained was still not readable, so another method, Ollydbg, was used instead. It can be found at http://mitglied.lycos.de/yoda2k/LordPE/info.htm.
**Notepad**
Notepad is a built-in text editor that comes with Windows systems. It is used as a method to safely view files without launching them. This was used in many different forms throughout the analysis to look at files and view output from tools.
**Regshot**
Regshot is a great tool that allows you to take before and after pictures of the registry settings. This was run before msrll.exe was launched and again immediately afterwards to help determine the modifications to the registry. The homepage for Regshot is [http://regshot.ist.md/](http://regshot.ist.md/); however, it was unavailable the last time I checked. It can be found easily with a simple Google query.
**Netcat**
This tool has been often called a “Swiss Army Knife” because of its many capabilities. You can use it to set up a listener on any port to allow connections to it or you can transfer files using it. It is very flexible and will be used to simulate any needed listening ports that the malware might want to connect to. This can be found at [http://netcat.sourceforge.net](http://netcat.sourceforge.net).
**Process Explorer**
This is a tool from Sysinternals that allows you to monitor the processes running on the system as well as any handles they might have. You can also view all the information about the process such as the command line that calls it, security settings etc. This will be used to help monitor what the malware is doing. This can be found at [http://www.sysinternals.com/ntw2k/freeware/proexp.shtml](http://www.sysinternals.com/ntw2k/freeware/proexp.shtml).
**PSkill**
This tool is a command line tool that will allow you to terminate a process by typing “pskill <PID>”. This will be used to quickly kill a process if needed. This can be found at [http://www.sysinternals.com/ntw2k/freeware/pskill.shtml](http://www.sysinternals.com/ntw2k/freeware/pskill.shtml).
**MD5sum**
The tool is a command line executable that allows you to generate an MD5 hash of a file. This will be used to hash the malware in question before and after it is launched to see if it is modifying itself or if it is the same as the original. The tool can be found at [http://www.gnu.org/software/textutils/textutils.html](http://www.gnu.org/software/textutils/textutils.html).
Part 2: Properties of the Malware Specimen
Type of File
The malware file is a compressed executable that has been compressed with ASPack. This was determined by looking at the file in PEInfo. It was easy to look at the Sections area and find how the file was compressed as .aspack is listed. This is key to know because the file is unreadable for the strings in its current state and will require the file to be unpacked.
Size of the File
The file itself in its packed state is 41,984 bytes (41 KB). This was determined by looking at the file in PEInfo; if you click on the filename itself at the top of PEInfo, it will show you the file size. This was further verified by right clicking on the file and looking at its properties in Explorer. After the file was unpacked via Ollydbg, following the same procedure as described above showed a file size of 1,182,720 bytes (1.12 MB).
MD5 Hash of the File
The MD5 hash of the file in its packed state is 84acfe96a98590813413122c12c11aaa. This was determined by using a command line tool called md5sum.exe, which is used by issuing the following command at the command line: “md5sum.exe msrll.exe”. I placed MD5sum.exe in the directory where the malware was located to avoid modifying the path. The MD5 hash of the file was also taken after it was launched, with the following results:
\84acfe96a98590813413122c12c11aaa *C:\WINNT\system32\msrll.exe.
This was done to ensure no modification of the file occurred after it was launched.
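This before-and-after check is easy to script. The sketch below (Python, offered only as an illustration of the md5sum.exe workflow, not a tool used in the lab) computes a file's MD5 digest in chunks and compares it against the baseline taken before launch:

```python
import hashlib


def md5_of_file(path, chunk_size=65536):
    """Return the hex MD5 digest of a file, read in chunks so that
    even large unpacked binaries do not need to fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def unchanged(path, baseline_digest):
    """True if the file still matches the digest taken before launch."""
    return md5_of_file(path) == baseline_digest
```

Running `md5_of_file` against the packed specimen before launch, and `unchanged` against the copy in C:\WINNT\system32\mfm afterwards, automates the comparison described above.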
Operating System it Runs on
The malware is a Windows based executable. This was determined by using PEInfo, which breaks down the file structure of Windows based executables.
Strings Embedded into it
The strings embedded into it are not visible via PEInfo in its packed state. However, once the file was unpacked using Ollydbg, the strings were readily available using PEInfo and there were lots of strings. Here are the strings found:
<table>
<thead>
<tr>
<th>!This program cannot be run in DOS mode</th>
<th>msg</th>
<th>GetExitCodeProcess</th>
<th>StartServiceCtrlDispatcherA</th>
</tr>
</thead>
<tbody>
<tr>
<td>.data</td>
<td>.kb</td>
<td>GetFileSize</td>
<td>kernel32.dll</td>
</tr>
<tr>
<td>.aspack</td>
<td>.sklist</td>
<td>GetFullPathNameA</td>
<td>AddAtomA</td>
</tr>
<tr>
<td>.adata</td>
<td>.umset</td>
<td>GetLastError</td>
<td>CloseHandle</td>
</tr>
<tr>
<td>.newIID</td>
<td>.uattr</td>
<td>GetModuleFileANameA</td>
<td>CopyFileA</td>
</tr>
<tr>
<td>?insmod</td>
<td>.dccsk</td>
<td>GetModuleHandleA</td>
<td>CreateDirectoryA</td>
</tr>
<tr>
<td>?rmmod</td>
<td>.con</td>
<td>GetProcAddress</td>
<td>CreateFileA</td>
</tr>
</tbody>
</table>
Part 2: Properties of the Malware Specimen
Page 8 of 24
```
leaves %s
unable to kill %s (%u)
CreateProcessA
inet_ntoa
:0 ** %s
%s killed (pid:%u)
CreateToolhelp32Snapshot ioctlsocket
joins: %s
AVICAP32.dll DeleteFileA
ACCEPT unable to kill %s (%u)
DuplicateHandle select
resume pid %u killed
RtlEnterCriticalSection sendto
err: %u
error!
ExitProcess setsockopt
DCC ACCEPT %s %s %s ran ok
ExitThread shutdown
dcc_resume: cant find port %s
MODE %s +o %s
CreateToolhelp32Snapshot
DCC RESUME %s %u Could not copy %s to %s
FreeLibrary
%a %s copied to %s
GetAtomNameA
%ssl 0123456789abcdef GetCommandLineA
%clone %s unset GetCurrentDirectoryA
%clones unable to unset %s GetCurrentProcess
?login (%s) %s GetCurrentThreadId
?uptime libssl32.dll GetExitCodeProcess
?reboot libray32.dll GetFileSize
?status <die<join>[part]raw[msg]>
GetFullPathNameA
?jump AdjustTokenPrivileges GetLastError
?nick CloseServiceHandle GetModuleFileNameA
?echo CreateServiceA GetModuleHandleA
?hush CryptoAcquireContextA The procedure entry point %s could not be located in the dynamic link library %s
?wget CryptoGenRandom The ordinal %u could not be located in the dynamic link library %s
?join CryptoReleaseContext (08@P`p
?op GetUserNameA kernel32.dll
?op LookupPrivilegeValueA GetProcAddress
?akick OpenProcess Token GetModuleHandleA
?part OpenSCManagerA LoadLibraryA
?dump RegCloseKey advapi32.dll
?set RegCreateKeyExA msvcrtd.dll
?die RegSetValueExA msvcrtd.dll
?md5p RegisterServiceCtrlHandlerA shell32.dll
?free SetServiceStatus user32.dll
?raw StartServiceCtrlDispatcherA version.dll
?update AddAtomA wininet.dll
?hostname CloseHandle ws2_32.dll
?fif CopyFileA AdjustTokenPrivileges
?fif CreateFileA autoplay
?ff CreateDirectoryA getmainargs
?del CreateMutexA ShellExecuteA
?pwd CreateMutexA DispatchMessageA
?play CreatePipe GetFileVersionInfoA
?copy CreateProcessA InternetCloseHandle
?move CreateToolhelp32Snapshot WSAGetLastError
?dir DeleteFileA advapi32.dll
?sums DuplicateHandle AdjustTokenPrivileges
?ls EnterCriticalSection CloseServiceHandle
?cd ExitProcess CreateServiceA
?mdir FileTimeToSystemTime CryptoAcquireContextA
?mkd dir FindAtomA CryptoGenRandom
?run FindClose CryptoReleaseContext
?exec FindFirstFileA GetUserNameA
?ps FindNextFileA LookupPrivilegeValueA
?kill FreeLibrary OpenProcessToken
?killall GetAtomNameA OpenSCManagerA
?crash GetCommandLineA RegCloseKey
?dcc GetCurrentDirectoryA RegCreateKeyExA
?get GetCurrentProcess RegSetValueEXA
?say GetCurrentThreadId RegisterServiceCtrlHandlerA
?set ServiceCtrlDispatcherA SetServiceStatus
```
The first step in starting to analyze the malware was to look at the strings for any clues about its capabilities and what it might do. One of the first things that popped out were references to “IRC” in some of the strings, so my initial assumption was that it was an IRC bot of some sort. In light of this, I looked for strings that might be used as IRC commands. I found several that were preceded by a “?” and looked like good candidates, such as “?login”, “?uptime”, “?join”, “?die” etc. Other strings that caught my attention were things like “irc.pass” and “dcc.pass” as possible ways for IRC password validation. The string “bot.port” seemed to indicate either a port that might be listening on the infected machine or a way to specify the port that the bot was connecting on, so it was something to look at later. I also noticed the strings “Ciphers built-in” and “Hashes built-in”, so I figured their usage might make password identification difficult. “Mozilla/4.0” and strings such as “InternetCloseHandle” and “InternetOpenUrlA” might mean a web server of some sort was being used. Also found were references to one server (collective7.zxy0.com); two of the references specified ports, 9999 and 8080, while the third did not, so it looked like it might be a webserver connection. There were two similar strings that I thought might be useful, “$1$KZLPLKdf$W8kl8Jr1X8DOH$Zsml$p9qq0” and “$1$KZLPLKdf$55isA1ITvamR7bjAdBzX”. I did not know what they were for, but I wanted to explore them further. Also mentioned were “Ping”, “UDP”, “Smurf”, “Jolt2”, and “SYN”. This was evidence that it possessed the capability to do denial of service (DOS) attacks.
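The string triage described above can be approximated in a few lines of Python. This is a rough stand-in for the Unix strings tool or PEInfo’s string view, not their actual implementation; the minimum run length of 4 is an arbitrary choice:

```python
import re


def ascii_strings(data: bytes, min_len: int = 4):
    """Yield runs of printable ASCII of at least min_len bytes,
    roughly what a strings tool would report for an unpacked binary."""
    pattern = re.compile(rb"[\x20-\x7e]{%d,}" % min_len)
    for match in pattern.finditer(data):
        yield match.group().decode("ascii")


def likely_bot_commands(strings_iter):
    """Filter for the '?command'-style tokens noted above."""
    return [s for s in strings_iter if s.startswith("?")]
```

Feeding the unpacked msrll.exe bytes through `ascii_strings` and then `likely_bot_commands` would surface candidates like “?login” and “?join” automatically.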
Part 3: Behavioral Analysis
Behavior Before Code Analysis
In preparation for launching msrll.exe on the Windows 2000 image, several tools were set up in advance to monitor its behavior. Filemon, Regmon and TDImon were all launched, their captures paused and the contents cleared. This was to attempt to capture only what was being done by msrll.exe. Also launched were LordPE to monitor the processes and RegShot to keep track of the registry. Ethereal was also run to sniff traffic coming from the system. The malware was then executed in the controlled environment described above. The tools and their output are described below in the analysis of the malware; however, it is important to note that only the key pieces of the analysis are listed. Each of these tools generates a lot of data that has to be analyzed to determine what is relevant in understanding the malware.
Monitoring of File System Access
Filemon showed some very key things that are listed and discussed below.
We see that the malware created a directory named mfm in the system32 folder.
177 3:08:45 PM msrll.exe:252CREATE C:\WINNT\system32\mfm SUCCESS Options: Create Directory Access: All
Next, the malware copied itself to the directory; an MD5 hash verified the copy to be the same as the original file.
224 3:08:45 PM msrll.exe:252CREATE C:\WINNT\system32\mfm\msrll.exe SUCCESS Options: OverwriteIf Sequential Access: All
Then we see the malware creates a file called jtram.conf.
869 3:09:04 PM msrll.exe:1044 CREATE C:\WINNT\system32\mfm\jtram.conf SUCCESS Options: OverwriteIf Access: All
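These file-system changes make convenient indicators for later incident response. A minimal Python sketch, with the paths taken from the Filemon capture above (the C:\WINNT system root is specific to the Windows 2000 test image):

```python
import os

# File-system artifacts observed in the Filemon capture above.
# The C:\WINNT prefix assumes a Windows 2000 system root; adjust
# for other Windows versions.
MSRLL_ARTIFACTS = [
    r"C:\WINNT\system32\mfm",
    r"C:\WINNT\system32\mfm\msrll.exe",
    r"C:\WINNT\system32\mfm\jtram.conf",
]


def present_artifacts(paths=MSRLL_ARTIFACTS):
    """Return whichever of the known artifact paths exist on disk."""
    return [p for p in paths if os.path.exists(p)]
```

An empty result suggests the box does not show these particular traces; a non-empty one warrants a closer look.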
Monitoring registry/configuration Access
For the registry, I was looking for things created and/or modified that might tell us what the malware is doing and how it is doing it. Here are some of the interesting findings from Regmon and RegShot.
Regmon shows us that it is setting itself up to run as a service.
1462 101.79209788 SERVICES.EXE:212 CreateKey HKLM\System\CurrentControlSet\Services\mfm SUCCESS Key: 0xE13704E0
Regmon also shows that several keys dealing with cryptography were created, such as the one below, probably to ensure that it had the crypto capabilities it wanted.
```
249 100.15181874 Filemon.exe:680 CreateKey
HKLM\SOFTWARE\Microsoft\Cryptography\RNG SUCCESS Key: 0xE1D14FE0
```
RegShot showed us more about the service that the malware set itself up to run as; the display name of the service is “Rll enhanced drive.”
```
HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\mfm\ImagePath:
"C:\WINNT\system32\mfm\msrll.exe"
HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\mfm\DisplayName:
"Rll enhanced drive"
HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\mfm\ObjectName:
"LocalSystem"
```
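Pulling these values out of a Regshot report can be automated. The sketch below is a simplified parser that assumes the two-line “path:” / quoted-data layout shown above; a real Regshot report also has section headers, which this ignores:

```python
def parse_regshot_values(report_lines):
    """Parse Regshot value entries of the form
        HKEY_...\\Key\\Name:
        "data"
    (or the same on one line) into {full_value_path: data}.
    A simplified sketch, not a complete Regshot report parser."""
    values = {}
    pending = None  # value path waiting for its data line
    for raw in report_lines:
        line = raw.strip()
        if not line:
            continue
        if pending is not None:
            values[pending] = line.strip('"')
            pending = None
        elif line.startswith("HKEY_") and line.endswith(":"):
            pending = line[:-1]
        elif line.startswith("HKEY_") and ": " in line:
            path, _, data = line.partition(": ")
            values[path] = data.strip('"')
    return values
```

Applied to the three entries above, this yields the ImagePath, the “Rll enhanced drive” DisplayName, and the LocalSystem ObjectName keyed by their full registry paths.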
**Monitoring/Redirecting Network Connections**
Before launching the program, I used Ethereal to start a packet capture to see if it was doing anything. One of the things noted after it was run were the DNS queries for collective7.zxy0.com that we noted in the strings. Since it was looking for that host, I modified the hosts file to point to the Linux VM image that I was going to use as the server. As soon as it got the DNS name resolved, it attempted connections to the server on ports 6667, 9999 and 8080. These connections were sent a RST since I had nothing listening on them yet.
In order to give it something to connect to, I used the Linux VM image that was given to us in class, which already had an IRC server ready to be used. Before starting the IRC server, I fired up Snort, since it was already on the Linux box, to capture the data being sent to us. Then I launched the IRC server and waited. The packets showed connection attempts to a channel called #mils. I joined that IRC channel, and very shortly a “user” popped up in it with me: the bot was now connected. The nickname used was not a readable one, and subsequent connections while analyzing the bot showed that the nickname was randomly generated. I decided to also set up a netcat listener on each of the other ports and see what would happen. The bot connects to the netcat listeners in the same fashion it connected to the IRC channel; once a connection is established and the bot logs in, there are no more attempts to connect to the other ports. The final thing I attempted was to issue commands to the bot from the IRC server that I was logged into. On the Linux image, I attempted all of the commands that I found listed and tried to use them to get the bot to respond. I was unsuccessful in my attempts.
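If netcat is not available, the listeners can be emulated in a few lines of Python. This is an illustrative sketch only: it accepts one connection on a given port, optionally sends a banner, and records the first thing the client transmits:

```python
import socket


def fake_listener(port, banner=b"", timeout=30.0):
    """Accept a single TCP connection on `port`, optionally send a
    banner, and return (client_ip, first_bytes_received) -- a poor
    man's netcat listener for watching what a bot sends on ports
    like 6667, 9999, or 8080."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))
        srv.listen(1)
        srv.settimeout(timeout)
        conn, addr = srv.accept()
        with conn:
            if banner:
                conn.sendall(banner)
            conn.settimeout(timeout)
            return addr[0], conn.recv(4096)
```

Running one of these per port of interest captures the bot's opening traffic without standing up a full service.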
**Monitoring Processes on the System**
I used Task Manager to check the processes running on the system and found that msrll.exe was indeed running as a process. I tried to kill it from the process list and was denied. I then went to look at the services and found I could not kill it from there either. I opened Sysinternals Process Explorer and found it there. I was able to kill it using Process Explorer or the command line tool PSkill, also from Sysinternals.
The next thing I wanted to know was what ports were being used by msrll.exe. I used Process Explorer again to find the msrll.exe process. Once it was selected, I right clicked to look at the properties. From there, I could see the connection to the IRC server, but I also saw that it was listening on port 2200 as well. To watch what was happening, I launched Ethereal again and, since Mozilla was mentioned in the strings, I used it to attempt to connect to the bot on port 2200. I was unsuccessful and kept getting RSTs from the bot host. I then attempted to pass it things from the strings that looked like they could be connected to, such as http://192.168.6.129:2200/jtr.home and http://192.168.6.129:2200/jtr.bin. I got nothing back on the first one, but on the second one I got a file that I downloaded. The file contained a “#.”. I tried connecting the same way via SSL, since it was mentioned, and I received a connection to port 2200. The packet captures showed that I was sent a “#.” in the data, but nothing appeared on my screen. I tried sending commands, but based on packet captures nothing would leave my browser. I then tried a telnet session, since the bot appeared to be shoveling me a prompt with the “#.”. I was rewarded with a “#.” on my screen, but after typing “?login testuser” and “?pwd testpwd” I was disconnected. So now I knew I had three avenues to attempt to figure out how to get in: the IRC channel, port 2200 via telnet, and using Mozilla.
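The telnet probe of port 2200 can likewise be scripted. This illustrative sketch (the host 192.168.6.129 and port 2200 come from the analysis above) just connects and grabs the first bytes the bot shovels back, which should be the “#.” prompt:

```python
import socket


def probe_prompt(host, port=2200, timeout=5.0):
    """Connect to the bot's listening port and return the first
    bytes it sends (the '#.' prompt observed above). Nothing is
    sent back, so the bot should not yet drop the connection."""
    with socket.create_connection((host, port), timeout=timeout) as conn:
        conn.settimeout(timeout)
        return conn.recv(64)
```

In the lab this would be called as `probe_prompt("192.168.6.129")`; scripting the subsequent “?login” exchange would follow the same pattern with `sendall`.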
**Behavior After Code Analysis**
Once I was able to bypass the authentication (see the code analysis section below) and talk to the bot directly, I attempted the commands to see what I was able to do with them. Of the three ways that I tried to get the bot to respond, the only one that was successful was via telnet. Here are the results of what I found for the commands that could be used and what I observed.
<table>
<thead>
<tr>
<th>COMMAND</th>
<th>RESULTS</th>
</tr>
</thead>
<tbody>
<tr>
<td>?si</td>
<td>This command gives the system information about the computer that you connect to via telnet. “WIN2k (u:Administrator mem\176\255) 30% GenuineIntel Intel(R) Celeron(R) CPU 2.70 GHz”</td>
</tr>
<tr>
<td>?ssl</td>
<td>This returned “?ssl: -1” I believe that this tells you whether SSL is being used or not.</td>
</tr>
<tr>
<td>?clone</td>
<td>It showed the following: “usage ?clone: server:[port] amount”. I tested this by typing “?clone 192.168.6.129:2200 1kb” and received “***bot.port: connect from 192.168.6.129”</td>
</tr>
<tr>
<td>?clones</td>
<td>This showed usage of “[NETWORK</td>
</tr>
<tr>
<td>?login</td>
<td>?login requires username then <ENTER> and password then <ENTER></td>
</tr>
<tr>
<td>?uptime</td>
<td>Shows how long the system has been up and how long the bot has been up in h/m/s</td>
</tr>
<tr>
<td>?reboot</td>
<td>This command reboots the system you are connected to and responds with “later!”</td>
</tr>
<tr>
<td>?status</td>
<td>Shows information about the computer the bot is on: “service:Y user:SYSTEM inet connection:Y contype:Ln reboot privs:Y”</td>
</tr>
<tr>
<td>?jump</td>
<td>I got no response from this command</td>
</tr>
<tr>
<td>?nick</td>
<td>This tells me to “Set an irc sock to perform ?nick command on Type .sklist to view current sockets, then .dccsk <#>”. The .sklist shows me the IRC channel and everyone in it, and then it shows me just the IRC server I am on. If you type .dccsk and then the number of the IRC socket, you get “using sock #1 collective7.zxy0.com:6667 (XmCMYzbzH)”, where XmCMYzbzH is the current nick of the bot that is logged in on the channel from the machine I am connected to on port 2200.</td>
</tr>
<tr>
<td>?echo</td>
<td>In watching packet captures, this appears to echo whatever was typed to the bot</td>
</tr>
<tr>
<td>?hush</td>
<td>I got no response to hush</td>
</tr>
<tr>
<td>?wget</td>
<td>This command gets you a file from the system you are connected to</td>
</tr>
<tr>
<td>?join</td>
<td>?join functioned in the same way as ?nick above</td>
</tr>
<tr>
<td>?op</td>
<td>I got “bad args” with this one, but I don’t know what they are looking for here.</td>
</tr>
<tr>
<td>?aop</td>
<td>No response</td>
</tr>
<tr>
<td>?akick</td>
<td>No response</td>
</tr>
<tr>
<td>?part</td>
<td>No response</td>
</tr>
<tr>
<td>?dump</td>
<td>No response</td>
</tr>
<tr>
<td>?set</td>
<td>This command lets you set values of jtr.bin, jtr.home, bot.port, jtr.id, irc.quit, servers and their ports, irc.chan, pass (a hash, but not sure what the password is for) and dcc.pass which is the password of the login to the bot.port. The password can be changed using this command.</td>
</tr>
<tr>
<td>?die</td>
<td>Killed all windows that I had open on the Linux box that was acting as my IRC server</td>
</tr>
<tr>
<td>?md5p</td>
<td>Takes the parameters <pass> <salt> and returns $1$SALT$Password Hash</td>
</tr>
<tr>
<td>?free</td>
<td>Takes the value <?cmd> and releases it from use. Tested by passing it ?pwd; ?pwd no longer returned results even after quitting the telnet session and logging back in. I had to terminate msrll.exe and restart it to get the functionality back.</td>
</tr>
<tr>
<td>?raw</td>
<td>?raw functioned in the same way as ?nick above</td>
</tr>
<tr>
<td>?update</td>
<td>Takes the parameters <url> <id>, but I couldn’t find how to use it.</td>
</tr>
<tr>
<td>?hostname</td>
<td>Returns the name of the computer and the IP address</td>
</tr>
<tr>
<td>?ifif</td>
<td>No response</td>
</tr>
<tr>
<td>?del</td>
<td>Deletes the specified file; a path can be included</td>
</tr>
<tr>
<td>?pwd</td>
<td>Tells you the current directory that you are in</td>
</tr>
<tr>
<td>?play</td>
<td>Shows you the contents of the file specified</td>
</tr>
<tr>
<td>?copy</td>
<td>Copies the file specified to another location and to the name specified</td>
</tr>
<tr>
<td>?move</td>
<td>Moves the file specified to the location of choice and to the name specified</td>
</tr>
<tr>
<td>?dir</td>
<td>Shows the directories and files of the directory that is your present working directory</td>
</tr>
<tr>
<td>?sums</td>
<td>Gives you the md5 hash of the files and their versions if known</td>
</tr>
<tr>
<td>?ls</td>
<td>Same as ?dir, Unix based command</td>
</tr>
<tr>
<td>?cd</td>
<td>Changes directories and uses 8.3 filename convention.</td>
</tr>
<tr>
<td>?rmdir</td>
<td>Deletes the directory specified and the path can be included</td>
</tr>
<tr>
<td>?mkdir</td>
<td>Creates a directory where specified</td>
</tr>
<tr>
<td>?run</td>
<td>Runs the executable specified, but you have to give the path to it. It shows in the processes as running, but not on the screen. Indicates OK for success</td>
</tr>
<tr>
<td>?exec</td>
<td>Same as run above, but does not indicate whether it successfully completed as ?run does</td>
</tr>
<tr>
<td>?ps</td>
<td>Lists all the processes running and their Process ID</td>
</tr>
<tr>
<td>?kill</td>
<td>Kills the specified process via ?kill <PID> and reports success or failure</td>
</tr>
<tr>
<td>?msg</td>
<td>?msg functioned in the same way as ?nick above</td>
</tr>
<tr>
<td>?kb</td>
<td>?kb functioned in the same way as ?nick above</td>
</tr>
<tr>
<td>?sklist</td>
<td>Shows the information about the current sockets</td>
</tr>
<tr>
<td>?unset</td>
<td>Stopped any commands from displaying information, although Snort showed that the correct information was being returned for the commands run such as ?set.</td>
</tr>
<tr>
<td>?uattr</td>
<td>?uattr functioned in the same way as ?nick above</td>
</tr>
<tr>
<td>?dccsk</td>
<td>Used to connect to a socket that is specified by the socket number that can be found with ?sklist</td>
</tr>
<tr>
<td>?con</td>
<td>Unsure of its use, but when used with a file name specified such as notepad I received the following: ***chdir: C:\winnt\system32\mfm -> C:\winnt\system32\mfm (0)</td>
</tr>
<tr>
<td>?killsk</td>
<td>Said it couldn’t kill the socket and specified a socket number</td>
</tr>
<tr>
<td>?insmod</td>
<td>Installs loadable modules</td>
</tr>
<tr>
<td>?rmmod</td>
<td>Removes loadable modules</td>
</tr>
<tr>
<td>?lsmod</td>
<td>Lists loadable modules</td>
</tr>
</tbody>
</table>
**The “Bot Army”**
I infected a Windows 98 box so that I now had a bot “army” to play with. I tried to get the commands to work that seemed to allow access to all of the bots, such as ?clones, but I was unable to get my bot “army” to respond to this command. I also tried to figure out how to get it to launch the DDoS attack that was referenced, but I was unable to do this as well.
Part 4: Code Analysis
Unpacking/Unencrypting
The first attempt to get the dumped code was with LordPE. However, I was not able to obtain an unpacked, decrypted copy of the process. There are several tools that will dump an ASPack-packed executable; however, I chose not to use them, as they can produce modified results and I wanted to make sure I had a clean copy. In order to get the unpacked code so that we can look at it, we are going to use OllyDbg to dump the code once it is unpacked in memory. The steps below describe how to get the unpacked code from memory. The description in class showed a UPX-packed file and used OllyDbg to dump it from memory; however, it did not discuss how to find the Original Entry Point (OEP) or the process for dumping any type of file. It only showed what the assembly code would look like at the end of a UPX-packed file. There are scripts that will do this for you for any type of packed file, such as those found at http://ollyscript.apsvans.com/. However, I wanted to learn the manual method, and Tom Liston helped by showing me how to do this for any type of file and what to look for. I will describe below the process that will find the OEP and allow you to dump the contents of msrll.exe.
Before firing up OllyDbg, make sure that OllyDump has been downloaded and placed in the OllyDbg directory. This is a free plugin that can be found at http://dd.x-eye.net/file/. To get started, first open OllyDbg and load msrll.exe. Now press ALT-M to open a memory map of the file. We are looking for the PE header information so that we can find the location in memory of the base image. Once you find the first PE header, click on it and a new window will pop up. In the new window, scroll down until you find “image base” in the right column. We find the image base is at 400000; record this for use later. Next, return to the main CPU thread window; you should be at the first command, “PUSHAD”. At this point press F8 to step over the entire subroutine; we want to be at the end of it, not stepping into it. We arrive at a call to msrll.0051d00a. We want to follow this in a dump, in memory, so that we can see the code this call points to. Right-click over the ESP register and select “Follow in Dump”. Now, in the bottom-left pane, highlight the first four bytes by holding the left mouse button down and selecting them. We only want the first four bytes, since this is what was pointed to by that call. Now right-click and select Breakpoint, Hardware, on access, Dword. A hardware breakpoint does not modify the memory contents. Now comes the fun part, as we are going to run the code up to that point by pressing F9. We should be at a command “jnz short msrll.0051d3ba” in the main CPU thread window. Press F8 until we hit a “RETN” command, which should be the end of the decompression function. Now step INTO it by pressing F7. We are now at the OEP, which is the key. Write down the address of the OEP, which was 401240, and press CTRL-A to analyze the code. You should now see readable code. To dump the code, we need the image base we wrote down earlier. To find the memory offset of the code, use this formula: OEP − image base. In our case it is 1240, so now select Plugins from the top of the OllyDbg main window, then choose OllyDump and “Dump debugged process”. Put 1240 in the Modify box, select the “Rebuild import” box, and dump the code. You will now have dumped code that is unpacked and readable for later use.
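The offset arithmetic above can be sketched in a few lines; the variable names are mine, not OllyDbg's:

```python
# Sketch (not from the original write-up): computing the value entered into
# OllyDump using the formula OEP - image base, with the addresses observed above.
image_base = 0x400000   # "image base" found in OllyDbg's memory map
oep = 0x401240          # Original Entry Point reached after the unpacking stub
offset = oep - image_base
print(hex(offset))      # 0x1240 -> the value for OllyDump's "Modify" box
```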
**Program Code Disassembly**
Once the bot was connected, I wanted to find out as much as I could about what it was doing. This was done by attaching to the process using OllyDbg on the W2K image and following the same procedure as above to dump the code into readable form. After this, I decided to use breakpoints at key places to try to determine which parts of the code control what. It would be impossible to discuss all of them, since I set so many different ones; however, a few will be discussed. My ultimate goal was to learn how the bot was logging in to the IRC channel and to be able to control it. After this, I will discuss what I found in the code. Here are the techniques used to figure out where to set breakpoints.
OllyDbg has great functionality and flexibility. One feature is to right-click on the main CPU window (upper-left window), select “Search for”, then choose “All referenced Text Strings”. A wonderful window will appear that shows you all of the readable text strings and where they are used in the code. From here you can set your breakpoints by highlighting an item and pressing F2. To view your breakpoints and turn them on and off very quickly, use ALT-B to bring up a window that shows them all. As a side note, to turn off the hardware breakpoints that you set, you need to use the main toolbar and, under Debug, select “Hardware breakpoints”. Another way of finding good places to set breakpoints is to look at commands that do comparisons, such as “strncmp” and “strcmp”. If you select the procedure that performs these and then hit CTRL-R, you will bring up a window that shows you every time that command is called and from where. You can then set breakpoints on these commands to see when they are used.
**Debugging**
To find where the password was being used, I set breakpoints on all commands referencing anything that looked associated with a password, such as “dcc.pass” and “irc.pass”. I then attempted to launch commands from the IRC window, starting by stepping through the code line by line from the breakpoint and watching what was happening. However, after hours of looking at the code, I realized that I was not seeing a password and could not figure out how irc.pass, which appeared to control the password to the IRC channel, was working. I then attempted to log in via the telnet session using the commands “?login testlogin” and “?pwd testpwd” (it was later determined that ?pwd was not for the password, but rather for the “present working directory”), which kept triggering dcc.pass, and then I would end up hitting these two lines:
0040BC6A PUSH msrll.0040BB49 ASCII "bot.port"
0040BC6F PUSH msrll.0040BB52 ASCII "%s bad pass from "%s"@%s"
I knew I had found the login code for port 2200. During the process, I saw one of the hashes mentioned in the strings earlier being passed as a value: "$1$KZLPLKDF$55isA1TvamR7bjAdBziX". In watching the code be parsed, I saw the value passed and broken down by the $. $1$ was always a constant value, then KZLPLKDF, and finally the last letters; these were concatenated together. Later I realized that ?login required a username (which appears to be ignored) followed by a password. The above string is the hash of the expected password. I tested three passwords and set a breakpoint at address 0040D655, which is the final compare of the password entered against the actual password. To do this I used different usernames with each of the passwords. Every password had its own unique hash regardless of the username that I used. Since I was only seeing the hash, which appeared to be stored encrypted, I decided to see if I could bypass the authentication by modifying the registers. I set a breakpoint at 0040BBD9 PUSH msrll.0040BB40 ASCII "dcc.pass". Here are the key lines of code that will allow you to bypass the authentication:
0040BBD9 PUSH msrll.0040BB40 Arg2 = 0040BB40 ASCII "dcc.pass"
0040BBDE PUSH EDX Arg1
0040BBDF CALL msrll.00405872 msrll.00405872
0040BBE4 ADD ESP,10
0040BBE7 TEST EAX,EAX
0040BBE9 JE SHORT msrll.0040BC5A
If you read what this is doing, you see it is passing “dcc.pass” as an argument to the call to the msrll.00405872 procedure. Then it adjusts the stack with “ADD ESP,10”, and finally the key to the puzzle: it checks EAX. “TEST EAX,EAX” sets the zero flag if EAX is zero, and “JE” takes the jump only when EAX is zero, which leads to the bad-password path; if EAX is nonzero, execution falls through and you get authenticated. So, I forced it down the fall-through path by modifying the value of EAX to be 1 (right-click and then Increment). After pressing F9, I returned to my telnet session and, sure enough, I was still connected. To see if I was authenticated, I typed ?si and got the information about the system. I was finally in. Once authenticated, I tested ?md5p and got the usage “?md5p <pass> <salt>”. So I put in a password of malware and a salt of test and got the following result: "$1$test$02Ctxuyv0OHiS01Hx6hS1". I now knew that KZLPLKDF was the salt being used, but I still had no way of determining the password in use.
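As an aside, the $1$salt$hash layout observed above can be pulled apart with a few lines of string handling. This is an illustrative sketch of the format, not code taken from the bot:

```python
def parse_md5crypt(value):
    """Split a "$1$salt$hash" string (MD5-crypt layout) into its parts.

    "$1$" identifies the scheme; the salt and the hash follow, each
    separated by "$".
    """
    _, scheme, salt, digest = value.split("$")
    if scheme != "1":
        raise ValueError("not an MD5-crypt string")
    return salt, digest

# The hash seen in the debugger: KZLPLKDF is the salt portion.
salt, digest = parse_md5crypt("$1$KZLPLKDF$55isA1TvamR7bjAdBziX")
print(salt)  # KZLPLKDF
```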
Another important file created and used by the malware was jtram.conf. To find out what this file was used for, I set breakpoints in the code that referenced the file. This file was written to on occasion, six lines at a time, usually after several logins and logouts by the bot (caused by me pausing the code and stepping through it). However, it was also written to after using the ?set command for the passwords. At these times it appears that the passwords generated for the bot were written to it. However, the hashes do not match those passed as arguments, so my guess is that they are password hashes that have been encrypted before being written to the file. From all indications, this file is read at initial startup and then not again. I believe this to be the case because I never saw it read during any login attempts.
Part 5: Analysis Wrap-Up
The malware that we were asked to analyze is an IRC bot. It connects to the IRC server called collective7.zxy0.com on one of three ports: 8080, 9999, or 6667. It also listens on port 2200, which appears to be the command port for talking to the bot directly and, I believe, for controlling the bot network, though I could not validate this. Based on observations of the bot and its commands, the owner of the bot network has full control over all of the infected systems.
The malware has the capability to allow full control over each individual computer that has been infected; the owner of the PC is no longer the “true” owner of the system. Based upon the strings, it allows a connection to the bot via a web interface using Mozilla, but I could not figure out how to issue commands once connected, and it denies connections from Internet Explorer. It also has the capability to launch the following DoS attacks: JOLT2, Smurf, UDP, SYN, and ping floods. With all of the bots participating, the bot network would be able to launch a DDoS attack against a specific target. It appears that the bots can be controlled by specifying a specific network that certain bots are on, or all of them at once, judging from the parameters of the “?clones” command. This program would be used by someone who wanted to have an “army” at their disposal. There are many people who would use this capability, such as people out for revenge, or someone with a bot army for hire who wanted to make money by taking out a target. It could also be used by those engaged in information warfare to disrupt Internet connectivity. This could be devastating for our Armed Forces, especially in light of the use of the Internet for daily operations.
In order to help protect against this type of malware, several things should be done. Users need to ensure they have antivirus software running and updated, as this malware is detected by Norton as Backdoor.IRC.Bot. Organizations can do this by using something like Norton’s Corporate Edition to manage the whole organization. The firewall, whether corporate or home, needs to block unneeded ports, such as those being used here; if a machine were infected, this would stop the bot from connecting. Also, users need to log on with the most restrictive privileges; if they were infected, it appears the bot would have the privileges of the logged-on user. The system needs to be kept up to date with all patches to ensure that there is no way to be infected using a known exploit, such as the .chm vulnerability. One of the most important things is user training. Users need to learn not to open email from someone they don’t know or to click on attachments they are unsure of. They also need to be careful when downloading software from unknown websites, as it can contain malicious code.
To clean your system of this bot, run your antivirus software and remove all references it finds. Alternatively, you can remove it manually by deleting the registry keys mentioned above for the service and under the Run key. The service can be stopped using pskill or Process Explorer, and then the mfm directory and its contents can be deleted. This type of bot should be easy to prevent if the steps above are taken.
Citation of Sources
Liston, Tom. Conversation via Jabber on finding the OEP. September 2004
Sites for Tools
VMWare Workstation 4.5.1: http://www.vmware.com/
Linux Redhat: http://www.linux.org/.
Ollydbg: http://home.t-online.de/home/Ollydbg/.
SNORT: http://www.snort.org/.
LordPE: http://mitglied.lycos.de/yoda2k/LordPE/info.htm.
Regshot: http://regshot.ist.md/; however, the site was unavailable the last time I checked. It can be found easily with a simple Google query.
End-to-End Database Software Security
Denis Ulybyshev *, Michael Rogers, Vadim Kholodilo and Bradley Northern
Department of Computer Science, Tennessee Technological University, Cookeville, TN 38505, USA
* Correspondence: dulybyshev@tntech.edu; Tel.: +1-931-372-6127
Abstract: End-to-end security is essential for relational database software. Most database management software provide data protection at the server side and in transit, but data are no longer protected once they arrive at the client software. In this paper, we present a methodology that, in addition to server-side security, protects data in transit and at rest on the application client side. Our solution enables flexible attribute-based and role-based access control, such that, for a given role or user with a given set of attributes, access can be granted to a relation, a column, or even to a particular data cell of the relation, depending on the data content. Our attribute-based access control model considers the client’s attributes, such as versions of the operating system and the web browser, as well as type of the client’s device. The solution supports decentralized data access and peer-to-peer data sharing in the form of an encrypted and digitally signed spreadsheet container that stores data retrieved by SQL queries from a database, along with data privileges. For extra security, keys for data encryption and decryption are generated on the fly. We show that our solution is successfully integrated with the PostgreSQL® database management system and enables more flexible access control for added security.
Keywords: software security; database security; access control; data privacy
1. Introduction
Database management software is widely used in private and public sectors, including government, manufacturing, public utilities, e-commerce, and other domains where storage and fast retrieval of data are desired. The relational database model is widely used due to its flexibility and scalability, and its wide application to many kinds of data. Furthermore, it is easy to understand and query and has a flexible and popular language interface called Structured Query Language (SQL).
When the first relational database management software applications came into wide use, they had very little security. As their popularity grew, so did malicious attacks on relational databases in order to steal information. Therefore, more security was integrated into relational database management software over the years, both for data in storage and data delivered to the client over the network.
Unfortunately, typically, once data arrive at the client application, data are no longer secure. They are stored in files or presented in report format without encryption or access control and can be viewed by anyone that has access to the client’s computer. Likewise, if a client shares these data by transferring them via email or other methods over the commodity Internet, they can be intercepted and read by unauthorized parties. A gap exists in current technology for providing server-enforced security after the data reach the client. In other words, the data should reach the client in a secure form that guarantees that the data remain confidential except for those that have the right to access them.
An important component of this existing gap is a lack of flexible access control both while the data are at the server and after the data reach the client. In most relational database management systems (RDBMS), access privileges can be granted to the relation (table) or to a particular column (attribute) of the table. Supporting more fine-grained access control policies by granting privileges to a particular data cell, depending on the data content stored in that cell, is highly desirable. For instance, a given role, e.g., “gastroenterologist”, can access the attribute “diagnosis” if the diagnosis begins with “gas”. Our solution provides fine-grained access control and supports data integrity and secrecy after the data reach the client.
To summarize, in this paper, we propose a methodology to solve the following problems:
1. Protect database records on the relational database client side, as well as in transit.
2. Provide fine-grained role-based and attribute-based access control for database records on the client side after it leaves the server. In our access control model, for a given role or user with a given set of attributes, access can be granted to a relation, a column, or even to a particular data cell of the relation, depending on the data content.
3. Enable decentralized data access and peer-to-peer data exchanges between clients, which eliminate the necessity to contact the database server each time to request data from the database.
2. Motivation and Goals
Typical methods of security for most database management software are as follows. Most servers require users to log into the database server to provide authentication and authorization for access control. Once a database user is identified, the database server can enforce its rules that determine what data that user can access. A database administrator can allow users to access tables, views, or stored procedures for accessing the data. Furthermore, to control updates, a database management system (DBMS) can enforce integrity controls such as primary keys, foreign keys, type constraints, etc., that constrain the way users manipulate the data. For secrecy, database management software may encrypt files on some secondary storage and also encrypt query results for secure network delivery to the user.
Many database management software applications do not have the ability to define access controls at the column level. Instead, the method of controlling access to specific columns in tables for most database management systems is to disallow direct query access to the tables and define views and stored procedures. These views and stored procedures execute queries that may leave columns out of the result set for particular roles. Unfortunately, these methods are not as scalable for database management as role-based access control. For example, in a DBMS, the administrator has to create separate views or stored procedures for each separate user/role. For \( N \) different roles, \( N \) different views and/or stored procedures are needed. Moreover, changes to privileges mean that every view/stored procedure would have to be rewritten. However, if an RDBMS could support role-based access control (RBAC) at the column level, then only the access control list (ACL) would need to be changed.
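The per-role view pattern described above can be sketched with SQLite; the schema and the role name are illustrative assumptions, not taken from the paper:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patient (name TEXT, diagnosis TEXT, ssn TEXT)")
conn.execute("INSERT INTO patient VALUES ('Alice', 'gastritis', '123-45-6789')")

# One view per role: the "nurse" view leaves the ssn column out of the
# result set. With N roles, N such views must be created and maintained,
# which is the scalability problem noted above.
conn.execute("CREATE VIEW patient_for_nurse AS SELECT name, diagnosis FROM patient")

rows = conn.execute("SELECT * FROM patient_for_nurse").fetchall()
print(rows)  # [('Alice', 'gastritis')]
```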
Although some databases do allow column-level access control at the time of a query [1], the lack of granularity of access control is not the only issue. All of the typically supported RDBMS security measures are effective as long as the data are controlled by the server, but not after the data reach the client (user). For example, consider a scenario where a team of medical professionals needs to access the medical history report for a patient. The report is generated from an SQL query or a sequence of SQL queries. The data for that patient that are stored on the trusted server are secure, and the report is securely generated and then delivered to the head physician on a secure channel. Although the head physician should be able to read all of the report, only parts of the report may need to be read by radiologists, cardiologists, and pharmacists. However, the report should not be accessible by anyone else, and those that can access the report should only be able to read the parts for which they have authorization. In this scenario, the head physician cannot simply deliver the report to those that need to read it. If the report is delivered over an insecure network (e.g., via http), anyone sniffing the network can also see it. Additionally, the head physician cannot guarantee that the intended recipients will only read the parts for which they are authorized. This scenario is not atypical and can be applied to corporations,
academic institutions, and many other domains that have reports generated from relational databases that must be viewed by various departments or organizational units.
The goal of this paper is to describe a design and an implementation of the PROtected SPReadsheet container Generator for SQL (PROSPEGQL), a novel add-on to relational databases that supports scenarios similar to the above. In particular, we developed and integrated a container-based solution into an RDBMS that provides the following:
- **Persistent security for SQL results.** Query results are secure even after landing on the client’s computer and being shared with other clients.
- **Fine-grained access control.** The container-based solution provides role-based and attribute-based access control so that only authorized entities are allowed to see the data. These ACLs are embedded into the container, so access control can be enforced no matter where the container is stored, or how it is transferred from one destination to another. Access control operates at the relation (table), attribute (column), and cell (data item) levels, depending on the access control needed.
- **Decentralized data access to the database.** A user does not need to contact the database server each time they need the data. Therefore, a single point of failure is eliminated, and query and retrieval cost is reduced.
- **Integration with any RDBMS with stored procedure support.** This includes databases that already have cryptographic support, such as PostgreSQL®, Microsoft® SQL Server® (the paper “Persistent Security and Flexible Access Control for RDBMS” is an independent publication and is neither affiliated with, nor authorized, sponsored, or approved by Microsoft Corporation® [2]), or CryptDB [3].
- **On-the-fly encryption/decryption key generation** for scalable and secure key management.
In the access control model used for our PROSPEGQL, access privileges to a particular cell (data item) are specified with regular expressions that enable great flexibility in defining access conditions. Access control follows the principle of least privilege for column data, aggregates, and computed/sorted values. In other words, the ACLs are constructed such that a user’s privilege for a computed value is the least privilege that the user/role holds over any of the values involved in the computation (see Section 4 for a further description).
The impact of our proposed solution, which can easily be deployed at commodity RDBMS servers, is that it will protect database records on the client side after data leave the database server and supply a secure way to share information among database users in a decentralized peer-to-peer way. PROSPEGQL can be used in hospital information systems, large and small organizations, and anywhere role-based access control for relational data is necessary.
3. Related Work
To ensure data privacy on untrusted servers, a database must store the data in encrypted form. SQL queries over encrypted data must be supported, along with a fine-grained access control. Database clients might need to access the encrypted relations (tables) or separate data attributes (columns).
A PostgreSQL® RDBMS enables different encryption options [4] to protect data in transit and at rest. PostgreSQL® supports encrypting a specific attribute in a table or encrypting the entire data partition. It also supports client-side encryption, which can be used when a database administrator is untrusted. However, the key distribution problem arises if the data owner that has encrypted the data with a key wants multiple entities to access data subsets.
The approach in [5] implemented a trust model that allowed operations in a decentralized setup but did not address access control. In contrast, our approach provides data protection in transit and at rest on a client’s side and enables a fine-grained access control. This allows different authorized roles or users to access a table, a separate column, or a separate data item; these data have been encrypted by a data owner. After SQL query results land on a client’s side computer, the client can share the data in the form of a
PROtected SPReadsheet Container for DataBases (PROSPECDB) with the other parties without the necessity to communicate with a database server.
The CryptDB [3] database engine stores data on the database server in encrypted form and protects the data from untrusted database administrators. CryptDB supports a subset of SQL queries to work on encrypted data. When a client issues an SQL query, data decryption takes place on a trusted proxy server and then the decrypted SQL results are sent to the client. The database server does not have access to the encryption and decryption keys [3]. If the server is compromised, then only ciphertext is revealed and data leakage is limited to data for users who are currently logged in to the database. In the PROSPEGQL solution, encryption and decryption keys are stored neither on the client’s nor the server’s side, nor on any trusted third party. The encryption keys are generated on the fly when the PROSPECDB is generated. Decryption keys are generated on the fly when data in the PROSPECDB are viewed by an authorized client [6].
For supporting inequality and range queries, CryptDB supports order-preserving encryption (OPE), which is prone to frequency analysis attacks, and deterministic encryption (DET). In 2015, Naveed et al. demonstrated in [7] successful attacks on CryptDB to recover the plaintext from database columns encrypted with the DET and OPE encryption schemes. Raluca Popa in [8] presented guidelines on how to use their CryptDB system to prevent sensitive data leakage. The DET scheme provides strong encryption guarantees only if there are no data repetitions in DET-encrypted attributes and every value is unique. OPE should not be used for columns with sensitive data. One solution to replace OPE could be fully homomorphic encryption, but it imposes a very significant performance overhead. As an alternative, partially homomorphic encryption can be employed. A significant difference between our PROSPEGQL implementation and CryptDB’s handling of sensitive and nonsensitive columns is that PROSPEGQL protects against the inference attacks to which the OPE encryption scheme is vulnerable.
PROSPEGQL derives ACLs such that a user/role only has access to a particular column in a query result set according to the column used in any aggregate, function, expression, or ordering that has the least privilege (see Section 4 for further explanation). This scheme prevents access to computed columns and sorted results if the client does not have access rights to the columns involved in the computation/sorting.
In a PROSPECD data container, presented in [6], the smallest granularity unit for access control is a data worksheet (a data subset in the spreadsheet file). In this paper, the containerization solution is extended to the PROSPECDB container to support access control not only for separate data worksheets but also for separate data columns inside the worksheets and for separate data cells depending on the data content, using Perl® regular expressions [9]. This allows us to grant or deny a given database user or role access to an individual data item based on the content stored in that data item. For this paper, we created PROSPEGQL by integrating the PROSPECDB container with a PostgreSQL® RDBMS.
The conceptual difference between the PROSPECDB container and an active bundle [10–13] is that an active bundle incorporates data, metadata, and a policy enforcement engine (virtual machine), whereas PROSPECDB only stores data and metadata, without a policy enforcement engine. Furthermore, in contrast with active-bundle and P2D2-based [14] solutions, PROSPECDB detects several types of data leakages that can be made by malicious insiders to unauthorized entities [6]. Moreover, in PROSPECDB, the access control policies can be specified in the form of Perl® regular expressions that decide based on the data content whether the access privilege can be granted.
A solution by Tun and Mya in [15] encrypted the selected cells in a spreadsheet file and embedded the hash value of the content. In PROSPEGQL, each separate worksheet is encrypted with a separate symmetric key, generated on the fly, for fine-grained access control. Furthermore, access can be granted to a separate worksheet, attribute (column), or data cell depending on data content.
Almarwani et al. [16] proposed a solution that supported queries over encrypted data and a fine-grained access control. The static model was based on ciphertext-policy attribute-based encryption and CryptDB [3]. Their dynamic model was a combination of CryptDB and PIRRATE [17] that was built on attribute-based encryption and supported revoking access from users via a proxy [17]. Encrypted files could be shared in a social network and decrypted by multiple users on their side, using the proxy key. PROSPECDB differs in that it stores access control policies in encrypted form together with the encrypted data, as embedded worksheets. The PROSPECDB container is generated on a trusted server as an encrypted and digitally signed spreadsheet file, where each data worksheet stores the results of an SQL query which is encrypted with its own key, generated on the fly.
4. Core Design
The core design of PROSPEGQL integrates the RDBMS with a PROSPECDB container generator and a viewer, as shown in Figure 1. The client submits an SQL query through the query interface, which is processed by the integration components so that the PROSPECDB container can be generated. Finally, the container can be downloaded by an authorized client and viewed by the clients with appropriate access permissions. The following subsections describe the data flow of the system in detail, followed by the design of the PROSPECDB container and an access control model. We successfully integrated our PROSPECDB container with a PostgreSQL® RDBMS using stored procedures. Furthermore, the deployment can be easily ported over to Oracle® SQL or other RDBMS products with cryptographic functionalities.
Figure 1. PROSPEGQL Workflow.
4.1. Data Flow Design
As shown in Figure 1, a client submits a query through a commodity user interface such as a terminal emulator or graphical SQL client, or via a database API of a programming language. The query constructed must call one of two PROSPEGQL functions, which are `get_container()` or `get_container_url()`, and pass the SQL query to that function as an argument. An example of a PROSPEGQL query can be found in Listing 1.
Listing 1. Example of PROSPEGQL get_container Query
```sql
SELECT get_container(
'SELECT * FROM Patient NATURAL JOIN Billing_Info
WHERE Patient.id = ''PB0023S''')
```
The above query returns a PROSPECDB with the SQL query results and access control privileges in encrypted form, as a binary large object (BLOB). The client can then view the BLOB using the data viewer interface, store it for later use, or even send it over the network to other parties to view and be confident that the data in the container are protected from unauthorized accesses. The get_container_url() function is the same as get_container() except that instead of returning the PROSPECDB as a BLOB to the client, it stores the PROSPECDB in the repository and returns a uniform resource locator (URL). The utility of get_container_url() is that the PROSPECDB is immediately available for viewing by authorized parties via the secure web-based viewer without having to download it. To generate and view the PROSPECDB data, the PROSPEGQL functions accomplish the following steps, shown in Figure 1:
1. The client issues the SQL query to the database.
2. PROSPEGQL parses the SQL query argument to determine its database objects, which include columns used in its SELECT clause and tables in its FROM clause.
3. The function generates an access control list (ACL) by querying the database server for database privileges for the discovered columns and tables.
4. PROSPEGQL then evaluates and executes the SQL query on the database server to obtain a result set.
5. PROSPEGQL passes the result set and an ACL to the PROSPECDB container generator, which then generates the container.
6. PROSPEGQL then stores the resulting container in the PROSPECDB repository or, if the function is get_container() instead of get_container_url(), it passes the container back to the caller as a BLOB.
7. An authorized client can view PROSPECDB data in a Microsoft® Excel® Add-in, a standalone application, or in a web viewer. The authorized client communicates with the authentication server (AS) to derive decryption keys for accessible PROSPECDB data subsets. Details are described in Section 4.2.
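Steps 2 and 3, discovering the query’s database objects and deriving the ACL, can be illustrated with a small sketch. The parsing and the catalog layout below are simplified assumptions for illustration only; the actual PROSPEGQL implementation uses stored procedures and the database catalog:

```python
import re

def build_acl_sketch(sql_query, catalog):
    """Illustrative sketch of PROSPEGQL steps 2-3.

    `catalog` maps (table, column) -> {role: privilege} and is a
    hypothetical stand-in for the real database catalog."""
    # Step 2: parse the query for the tables and columns it touches.
    tables = re.findall(r'FROM\s+(\w+)', sql_query, re.IGNORECASE)
    select_list = re.findall(r'SELECT\s+(.+?)\s+FROM', sql_query,
                             re.IGNORECASE | re.DOTALL)
    columns = [c.strip() for c in select_list[0].split(',')] if select_list else []

    # Step 3: build an ACL from the catalog privileges of those objects,
    # keeping the least privilege seen for each role.
    acl = {}
    for col in columns:
        for (table, name), roles in catalog.items():
            if table in tables and (col == name or col == '*'):
                for role, priv in roles.items():
                    acl[role] = min(acl.get(role, priv), priv)
    return acl

# Hypothetical privileges: 1 = read, 0 = none.
catalog = {('Billing_Info', 'base_treatment_rate'): {'analyst': 0, 'doctor': 1},
           ('Billing_Info', 'sales_tax_rate'):      {'analyst': 1, 'doctor': 1}}
acl = build_acl_sketch('SELECT * FROM Billing_Info', catalog)
print(acl)  # analyst ends up with 0: the least privilege across the selected columns
```

Steps 4–6 (executing the query, encrypting the worksheets, and packaging the container) are elided here; the sketch only shows how the least-privilege ACL falls out of the catalog lookup.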
The construction of the ACLs in step 3 is accomplished in such a way as to reduce the threat of inference attacks. For example, a client submits the following query in Listing 2.
Listing 2. Example Query for Reduction of Inference Attacks
```sql
SELECT base_treatment_rate * sales_tax_rate
FROM Billing_Info
```
Consider a particular role that has read privileges for sales_tax_rate but not for base_treatment_rate. Step 3 would create an ACL for the column in the “Results” sheet for the expression “base_treatment_rate * sales_tax_rate”. The privilege for that particular role would be the same as that role’s privilege for the base_treatment_rate column in the database. In other words, the role would not be able to read the “base_treatment_rate * sales_tax_rate” column in the result set. Furthermore, consider the query in Listing 3.
Listing 3. Example Query for Demonstrating Roles
```sql
SELECT name, base_treatment_rate
FROM Billing_Info
ORDER BY base_treatment_rate
```
Again, consider that a particular role does not have read access to base_treatment_rate. A member of that role hopes to infer the values of the database column from the column in the result set according to its sorted order. However, step 3 will consider an ORDER BY clause to be an expression over the entire query (i.e., all the result set’s columns). Therefore, the ACLs for the columns in the query result set will be constructed such that each will have the lowest privilege level of all the columns used in its expression, including the columns in the ORDER BY clause. In this case, the role will not have the privileges to read any of the columns in the result set, and thus members of that role will be able to infer nothing.
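The rule can be stated compactly: a result-set column’s privilege for a role is the minimum of the role’s privileges over every database column appearing in its expression, with ORDER BY columns treated as appearing in every result column. A minimal sketch, with hypothetical privilege values (1 = read, 0 = none):

```python
def result_column_privilege(expr_columns, order_by_columns, privileges):
    """Privilege for one result-set column: the minimum privilege across
    the columns in its expression plus any ORDER BY columns, which step 3
    treats as part of every result column's expression."""
    involved = set(expr_columns) | set(order_by_columns)
    return min(privileges[c] for c in involved)

# Hypothetical privileges for one role.
privileges = {'name': 1, 'base_treatment_rate': 0, 'sales_tax_rate': 1}

# Listing 2: base_treatment_rate * sales_tax_rate -> unreadable.
print(result_column_privilege(
    ['base_treatment_rate', 'sales_tax_rate'], [], privileges))  # 0

# Listing 3: even the readable `name` column becomes unreadable, because
# ORDER BY base_treatment_rate taints every column in the result set.
print(result_column_privilege(
    ['name'], ['base_treatment_rate'], privileges))  # 0
```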
4.2. PROSPECDB Data Container
A PROtected SPReadsheet Container for DataBases (PROSPECDB) is a spreadsheet file that stores data subsets as separate encrypted data worksheets, along with an encrypted “Metadata” worksheet. Data worksheets are encrypted with separate symmetric keys that are generated on the fly. A “Metadata” worksheet contains access control policies encrypted with a separate predefined key. The on-the-fly key generation procedure takes the following inputs:
1. The hash value of the authentication server’s (AS) private key. A data viewer sends an https POST request with the attached X.509 certificate of the client to the AS, which verifies the client’s identity. The AS can only send the hash value of its private key to authorized clients, based on their roles. This hash value can be cached locally. The AS manages access revocations.
2. The hash value of metadata, which contains access control policies. Including this component in the symmetric key generation protects the PROSPECDB from unauthorized modifications of metadata.
3. The hash value of the worksheet’s name, i.e., the data subset name.
The SHA-256 hash function is used. The on-the-fly symmetric key generation procedure is the same as that used in a PROSPECD container [6]. For encryption in the PROSPECDB, the “CryptoJS” library, written in native JavaScript [18], is used. The new feature that PROSPECDB introduces in this paper, compared to PROSPECD, is that access can be granted to a separate attribute (data column in a worksheet) or a separate data cell (data item), depending on the content of this data item. The PROSPECDB is mapped to the relational model: columns in the container are attributes in the relational model. PROSPEGQL technology integrates the PROSPECDB into an RDBMS by incorporating both database ACLs and SQL query results into the encrypted worksheets. For instance, an electronic health record of a patient can be created as a result of querying multiple tables that contain clinical and administrative information.
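A minimal sketch of the on-the-fly derivation from the three inputs listed above, assuming the three SHA-256 digests are simply concatenated and hashed again to form the 256-bit AES key. The combination step is an assumption made for illustration; the actual procedure is defined in [6]:

```python
import hashlib

def derive_worksheet_key(as_private_key: bytes, metadata: bytes,
                         sheet_name: str) -> bytes:
    """Derive a per-worksheet 256-bit key from the three Section 4.2 inputs.
    Combining the digests by concatenation is an assumption; see [6] for
    the actual PROSPECD procedure."""
    h1 = hashlib.sha256(as_private_key).digest()       # input 1: AS private key hash
    h2 = hashlib.sha256(metadata).digest()             # input 2: metadata (ACL) hash
    h3 = hashlib.sha256(sheet_name.encode()).digest()  # input 3: worksheet name hash
    return hashlib.sha256(h1 + h2 + h3).digest()       # 32 bytes = 256-bit AES key

key = derive_worksheet_key(b'as-key-material', b'acl-metadata', 'Results')
print(len(key))  # 32
```

Note that because the metadata digest is an input, any tampering with the access control policies yields a different (wrong) decryption key, which is exactly the protection described for adversary 2 in Section 4.2.1.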
Data, encrypted in a PROSPECDB, can be accessed and viewed in one of these three options:
1. A cross-platform application installed on the client’s side.
2. A Microsoft® Excel® Add-in, installed on the client’s side.
3. A remote web viewer that runs on the same node as the PROSPECDB repository—see Figure 1.
To view the data, a client needs to enter credentials in the viewer. Based on the entered credentials, the client’s role is determined and data subsets available for this role are decrypted and displayed. The viewer enforces the access control policies, either on the client’s side or on a remote trusted server, depending on which of the above three options the client selected to view the PROSPECDB data.
4.2.1. Adversary Model
PROSPEGQL, integrated with PostgreSQL®, protects from the following type of adversaries:
1. A malicious database administrator who tries to view data on the database server. To protect data, a client must encrypt sensitive database table(s) or separate columns with their own encryption key using the native encryption support of the database server. PostgreSQL® supports several encryption modes [4]. Decryption keys are not stored on the server side. Our PROSPEGQL solution is RDBMS-agnostic and instead of PostgreSQL®, other RDBMS supporting the client-side encryption can be used.
2. A client tries to gain access to a PROSPECDB data subset for which the client is not authorized. Because the hash value of the “Metadata” worksheet that includes ACLs (see Table 1) is one of the inputs for the decryption key generation [6], a modification of the access control policies would lead to the generation of a wrong Advanced Encryption Standard (AES) [19] decryption key. Inaccessible data worksheets are encrypted with the AES protocol, using the cipher block chaining (CBC) mode and a 256-bit key, and hidden from unauthorized clients in the viewer or Microsoft® Excel® Add-in. Breaking 256-bit AES encryption is computationally infeasible. Based on our assumptions, listed below, the software to decrypt and view PROSPECDB containers (the PROSPECDB viewer) is trusted. The PROSPECDB viewer software is digitally signed by a trusted authority to guarantee its integrity and authenticity.
3. An adversary has access to the client’s computer but does not know the client’s credentials for PROSPECDB, and who tries to steal the client’s data from PROSPECDB. Similar to the item above, because the PROSPECDB spreadsheet file is encrypted, breaking the 256-bit AES encryption is computationally infeasible.
4. An adversary tries to steal the data, sent by a client to another user, while data are in transit. Depending on the use case, sending data in plaintext might be a violation of known policies and regulations, for example, the Health Insurance Portability and Accountability Act (HIPAA) in the healthcare domain or the Family Educational Rights and Privacy Act (FERPA) in the education domain. Thus, the client should transfer data to another user only in a protected form, such as a PROSPECDB. Since the PROSPECDB file is encrypted, even if the data communication channel is unprotected, an attacker will not be able to access the data without breaking the 256-bit AES key, which is computationally infeasible.
5. SQL Inference attack [20]. An authorized and malicious client tries to determine data they do not have a privilege to access by constructing an SQL expression that involves the column for which the client is not authorized. However, the client will not have the privileges to access the column and use it in any expression because of the way that the ACLs are constructed as described in Section 4.1 step 3.
Table 1. Access Control List with Regular Expressions.
<table>
<thead>
<tr>
<th>Columns</th>
<th>Admin</th>
<th>Doctor</th>
<th>Insurance</th>
<th>Analyst</th>
</tr>
</thead>
<tbody>
<tr>
<td>MedicalInfo.ID</td>
<td>1</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>MedicalInfo.Date-of-Visit</td>
<td>1</td>
<td>1</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>MedicalInfo.Doctor’s ID</td>
<td>1</td>
<td>1</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>MedicalInfo.Diagnosis</td>
<td>1</td>
<td>1</td>
<td>gas.*</td>
<td></td>
</tr>
<tr>
<td>MedicalInfo.Prescription</td>
<td>1</td>
<td>1</td>
<td>Sulfa.*</td>
<td>1</td>
</tr>
<tr>
<td>MedicalInfo.Blood Pressure</td>
<td>1</td>
<td>1</td>
<td>0</td>
<td>1</td>
</tr>
</tbody>
</table>
4.2.2. Assumptions
In order to thwart the above adversaries, we designed PROSPEGQL with the following assumptions in mind:
1. Hardware and an operating system on a database server and on a client’s side are trusted.
2. As long as a database server is trusted and encrypts relations or separate relation attributes (columns) stored in the server, then PROSPEGQL does not need to trust a database administrator.
3. The PROSPECDB viewer is trusted. It does not leak decryption keys and displays only the decrypted worksheets for which the client is authorized.
4.2.3. Security Analysis
In the probabilistic model of Goldwasser and Micali, “extracting any information about the cleartext from the ciphertext is hard on the average for an adversary with polynomially bounded computational resources” [21]. An adversary should not distinguish between the ciphertexts obtained from two plaintexts M₀ and M₁. Using the indistinguishability under chosen plaintext attack (IND-CPA) experiment, similarly to [22], and the concrete approach to define negligibility [23], we can show that the probability for an adversary to succeed in breaking the encryption scheme, used in PROSPECDB, is negligible. “A scheme is (t, c)-secure if every adversary running for time at most t succeeds in breaking the scheme with
probability at most $\epsilon$” [23]. For AES with a 256-bit key, taking $n = 256$ and $t = 2^{64}$ (note that $2^{64}$ seconds are more than 584 billion years), the scheme may be expected to be $(t, t/2^n)$-secure with $\epsilon = 2^{64}/2^{256} = 2^{-192}$. A success probability of $\epsilon = 2^{-192}$ makes breaking the encryption computationally infeasible for an adversary.
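The bound can be checked directly with exact arithmetic:

```python
from fractions import Fraction

n, t = 256, 2**64
eps = Fraction(t, 2**n)            # epsilon = t / 2^n
print(eps == Fraction(1, 2**192))  # True: epsilon = 2^-192

# t = 2^64 seconds expressed in years: well over 584 billion.
seconds_per_year = 365 * 24 * 3600
print(t // seconds_per_year)
```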
4.3. Access Control Design
Our access control design consists of two major components: role-based access control and attribute-based access control.
4.3.1. Role-Based Access Control
The PROSPECDB stores an ACL in the “Metadata” worksheet that is encrypted with a symmetric AES key [6]. As shown in Table 1, the ACL defines access for roles to a given column or a given data cell. These data privileges are retrieved from the database catalog. In the current implementation of PROSPEGQL, a privilege can be read or none, represented as 1 and 0, respectively, in Table 1. Further granularity for read access is specified using Perl® regular expressions by extending the database catalog with extra tables understood by PROSPEGQL. For example, the “Sulfa.*” record means that the role “Insurance” can view the “Prescription” column of a “MedicalInfo” data worksheet if the data string in that column starts with “Sulfa”. Thus, our model supports data access based on the data content. This extends the access control capability provided by traditional RDBMSs, such as PostgreSQL®, Microsoft® SQL Server®, etc.
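The content-based rule from Table 1 amounts to an ordinary regular-expression match against the cell value. A sketch follows; PROSPEGQL uses Perl® regular expressions, approximated here with Python’s `re`:

```python
import re

def cell_visible(acl_entry, cell_value):
    """ACL entry semantics from Table 1: 1 = always readable, 0 = never
    readable, otherwise a regular expression the cell content must match."""
    if acl_entry == 1:
        return True
    if acl_entry == 0:
        return False
    return re.match(acl_entry, str(cell_value)) is not None

# Role "Insurance", column MedicalInfo.Prescription, rule "Sulfa.*":
print(cell_visible('Sulfa.*', 'Sulfamethoxazole'))  # True
print(cell_visible('Sulfa.*', 'Amoxicillin'))       # False
```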
As explained in step (3) in Section 4.1, ACLs are generated by querying the database catalog for privileges for columns and tables identified in the SQL query. Then, these ACLs are encoded in the “Metadata” worksheet of the PROSPECDB container, as described in step (5) in Section 4.1. ACLs reference the column indices in the “Results” worksheet, not the SQL column names. Such an encoding supports SQL expressions, such as mathematical expressions, and supports access control based on data content via regular expressions. For SQL expressions on columns, the principle of least privilege is observed. In other words, the ACL is built such that, for a particular user/role, the minimum access privilege is encoded in the ACL. Likewise, the principle of least privilege is also used when building the ACLs for columns that have access rules based on data content via regular expressions. If an access rule based on data content is defined for a column used in the SQL, then the ACLs are created with the least privilege for that user/role. As described in the related work found in Section 3, this scheme protects against certain inference attacks, including attacks on OPE.
In all data access models, symmetric keys to encrypt and decrypt data are generated on the fly for a given user as defined by the role, based on three inputs, discussed in Section 4.2 and in [6].
4.3.2. Attribute-Based Access Control
When a client service requests data from a PROSPECDB, the following client attributes are evaluated: the type and version of the operating system and web browser (for data access in the web viewer), the type of the device, and the authentication method [24]. A user-agent string is retrieved from the user’s client. Each user-agent type is stored in an array, which points to an attribute (bucket) in a hash table “Attribute”, as illustrated in Figure 2. This hash table stores all possible names for a given attribute, such as the names of web browsers or operating systems. Once the attribute name is found in the “Attribute” hash table, a second hash table, which stores version numbers and the access ranking associated with each version, is queried. To compute the total access ranking for the user, each attribute is weighted by its importance and intrinsic security. The end goal is to restrict access for clients with insecure attributes. To decrease the time spent searching for each attribute, we employ a multithreaded model.
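The two-level lookup described above can be sketched as chained dictionaries (hash tables). The attribute names, versions, and ranking values below are invented for illustration, not taken from the actual system:

```python
# First-level "Attribute" hash table: attribute name -> second-level table
# mapping each known version to an access ranking (all values hypothetical).
ATTRIBUTE = {
    'browser': {'Firefox/115': 3, 'Firefox/60': 1, 'Chrome/120': 3},
    'os':      {'Windows 10': 3, 'Windows XP': 0},
}

def total_access_ranking(client_attributes):
    """Sum the per-attribute rankings. Unknown attributes or versions rank
    0, so clients with unrecognized (potentially insecure) attributes are
    restricted, per the goal stated in Section 4.3.2."""
    total = 0
    for name, version in client_attributes.items():
        total += ATTRIBUTE.get(name, {}).get(version, 0)
    return total

print(total_access_ranking({'browser': 'Firefox/115', 'os': 'Windows 10'}))  # 6
print(total_access_ranking({'browser': 'Firefox/60', 'os': 'Windows XP'}))   # 1
```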
5. Evaluation
In this section, we evaluated the performance of our PROSPEGQL solution. The system configuration for our experiment was as follows:
- CPU: Intel® Core™ i5-8250U @ 1.7 GHz; RAM: 8 GB DDR4
- OS: Microsoft® Windows® 10 Pro, 64 bit
- RDBMS: PostgreSQL®, 11
Firstly, we measured time to query the PostgreSQL® database and generate the PROSPECDB that included access control policies and query results, according to steps 1–5 discussed in Section 4.1. The SQL query used in this test can be seen in Listing 4.
Listing 4. Test Query to Get PROSPECDB Container with Billing_info
```sql
SELECT get_container('SELECT * FROM Billing_Info')
```
As shown in Table 2, PROSPEGQL was considerably slower than PostgreSQL® for queries with no encryption. However, the encryption cost is incurred only once, during PROSPECDB generation, for data that need to be viewed by many entities without querying the database again. Next, we wanted to determine whether the performance hit was caused by the overhead of PROSPEGQL. Therefore, we measured the component times for PROSPEGQL. In particular, we compared the total data retrieval time from the PROSPECDB with the decryption time for data in the PROSPECDB. As seen in Table 3, columns 3 and 4, almost all the overhead was in the decryption that was imposed when viewing the PROSPECDB. Data decryption accounted for 73.3% up to 99.9% of the data retrieval time, depending on data size.
Table 2. PROSPEGQL Generation Time.
<table>
<thead>
<tr>
<th>Data Size (Kbytes)</th>
<th>Plaintext Data Retrieval from PostgreSQL®, (ms)</th>
<th>Total PROSPEGQL Generation Time, (ms)</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.5</td>
<td>17</td>
<td>1600</td>
</tr>
<tr>
<td>2</td>
<td>22</td>
<td>1686</td>
</tr>
<tr>
<td>8</td>
<td>46</td>
<td>1888</td>
</tr>
<tr>
<td>32</td>
<td>368</td>
<td>2296</td>
</tr>
<tr>
<td>128</td>
<td>426</td>
<td>24,654</td>
</tr>
<tr>
<td>512</td>
<td>1594</td>
<td>31,836</td>
</tr>
<tr>
<td>2048</td>
<td>6406</td>
<td>41,687</td>
</tr>
</tbody>
</table>
Table 3. Data Retrieval Time: Encrypted PostgreSQL® vs. PROSPECDB Decryption-only vs. PROSPECDB vs. Encrypted JSON.
<table>
<thead>
<tr>
<th>Data Size (Kbytes)</th>
<th>Encrypted PostgreSQL® Data Retrieval Time, (ms)</th>
<th>PROSPECDB Data Decryption Time, (ms)</th>
<th>PROSPECDB Data Retrieval Total Time, (ms)</th>
<th>Encrypted JSON Data Retrieval Time, (ms)</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.5</td>
<td>12</td>
<td>120</td>
<td>152</td>
<td>40</td>
</tr>
<tr>
<td>2</td>
<td>22</td>
<td>135</td>
<td>160</td>
<td>53</td>
</tr>
<tr>
<td>8</td>
<td>43</td>
<td>210</td>
<td>247</td>
<td>113</td>
</tr>
<tr>
<td>32</td>
<td>106</td>
<td>520</td>
<td>614</td>
<td>246</td>
</tr>
<tr>
<td>128</td>
<td>395</td>
<td>8592</td>
<td>8893</td>
<td>946</td>
</tr>
<tr>
<td>512</td>
<td>1808</td>
<td>13,820</td>
<td>13,837</td>
<td>3780</td>
</tr>
<tr>
<td>2048</td>
<td>25,861</td>
<td>16,525</td>
<td>16,594</td>
<td>10,842</td>
</tr>
</tbody>
</table>
A client (web service) communicates with the PROSPECDB container via the http protocol. To send an https GET request to the PROSPECDB for data retrieval, we used ApacheBench® (we do not claim association with or endorsement of/for/by the Apache Software Foundation (ASF) [25]), version 2.3. Similarly to [6], the PROSPECDB data retrieval time started with the https GET request, sent by ApacheBench® to the PROSPECDB, and ended with the reception of the response. The retrieval time included the time spent on authentication, access control policy evaluation, data decryption, and retrieval [6]. Data retrieval times were measured as an average of 100 data requests. The client (web service) ran on the same computer as the PostgreSQL® database, the PROSPECDB, and the encrypted JavaScript Object Notation (JSON) file, to exclude network delays from the time measurements. We compared the SQL query execution and retrieval times for encrypted data columns in PostgreSQL® with the PROSPECDB data retrieval times, as seen in Table 3, columns 2 and 4. The following SQL query was used for decrypting and pulling data from encrypted data columns in PostgreSQL®. In the query, shown in Listing 5, we used eight columns, with 64 bytes of data in each of them. We varied the number of encrypted records in the PostgreSQL® table to get the required data size for the experiment.
Listing 5. Example Query to Decrypt Columns in PostgreSQL®
```sql
SELECT encode(decrypt(<column1_name>::bytea, 'decrypt_key', 'aes_cbc'), 'escape'),
       encode(decrypt(<column2_name>::bytea, 'decrypt_key', 'aes_cbc'), 'escape'),
       encode(decrypt(<columnN_name>::bytea, 'decrypt_key', 'aes_cbc'), 'escape')
FROM <table_name>
```
Table 3 (columns 2 and 4) shows that the performance of PostgreSQL® with encryption and of PROSPECDB followed a similar trend, which corroborated our conclusion that encryption/decryption was the bulk of the PROSPECDB generation overhead. Therefore, the overhead of securing the data from the database depended upon the implementation of the encryption algorithm, and both PROSPEGQL and PostgreSQL® make use of AES. Note that the hardware we ran our tests on did not support the AES-NI instructions. PROSPECDB containers support policy enforcement and data decryption on the client side, eliminating a single point of failure compared to server-based policy enforcement and decryption. For this reason, we did not assume that the client’s hardware had AES-NI support. The PROSPECDB protects data on the client’s side and distributes the load for decryption and policy management. Furthermore, the container relies on the on-the-fly key generation scheme, which adds an extra security layer to protect data [6]. Therefore, we believe the advantages of using PROSPEGQL are justified. Furthermore, retrieving
2048 Kbytes from PROSPECDB was 36% faster than from the encrypted PostgreSQL® table. For other data sizes, PostgreSQL® was 5.74 to 22.51 times faster.
Finally, we measured the data retrieval times for an encrypted JSON file and compared them with the PROSPECDB data retrieval times. We chose JSON because it is a universal data format. It performed better than the spreadsheet files in the PROSPECDB, as can be seen in Table 3, columns 4 and 5. We believe the speedup arose because, even though the sizes of the retrieved data in our experiments were the same, the JSON files were smaller than the spreadsheet files, which carry the metadata that Microsoft® Excel® includes. Furthermore, the JSON files did not require decompression. Depending on the data size, JSON was faster than the PROSPECDB by 1.53 to 9.4 times.
To improve the performance of encryption and decryption operations for the PROSPECDB, we are working on a microservice-based implementation and evaluating different cryptographic libraries. We are also investigating a C++ version of PROSPECQL, which was originally implemented in JavaScript and Python.
6. Conclusions
Our approach improved database security by providing data protection in transit and at rest on the client's side. The developed solution was integrated with a PostgreSQL® RDBMS and extended its access control model by supporting role-based and attribute-based access control for separate data columns and data items in the relations, depending on data content. The methodology enabled decentralized data access and peer-to-peer data exchanges between clients, which eliminated the need to contact the database server each time data was requested. Data encryption relied on on-the-fly key generation, which made the scheme more secure since the key was stored neither on the database server, nor inside the PROSPECDB data, nor on the client's side. The added functionality introduced extra overhead compared to PostgreSQL®, depending on the data size; this was mitigated by the fact that a client had to run the SQL query only once to receive data results that needed to be viewed by many entities.
7. Future Work
We plan to extend PROSPECQL to support the full range of PostgreSQL® query capabilities. We also plan to support other container formats, such as Extensible Markup Language (XML), for easier integration into existing software. Additionally, we plan to optimize PROSPECQL by using in-memory and in-process operations instead of passing the data to and from the PROSPECDB container as files. Likewise, we will integrate the container functions more tightly into the source of the DBMS instead of the interpreted stored procedure language in which the container generator is now written.
To address scaling for big data, we are working on a streaming solution and investigating using streaming JSON as our transfer protocol. A secure data container will be created on the client side and data from cache memory will be appended to it. Encrypted data from the database on the server side will be transferred to the client side and stored in memory, using Redis®, an open-source in-memory data store that can be used as a database cache [26]. All cached data will always remain encrypted to prevent data access if the system is attacked using one of the memory attacks, such as a cold-boot attack [27].
Funding: This research received no external funding.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: For evaluating our methodology, we used synthetic (artificially created) data to populate database tables. We did not use any publicly archived datasets or restricted datasets.
Conflicts of Interest: The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
- SQL: Structured Query Language
- DBMS: Database management system
- RDBMS: Relational database management system
- RBAC: Role-based access control
- ACL: Access control list
- PROSPECQL: PROtected SPrEadsheet Container Generator for SQL
- PROSPECDB: PROtected SPrEadsheet Container for DataBases
- URL: Uniform resource locator
- AS: Authentication server
- AES: Advanced Encryption Standard
- CBC: Cipher block chaining
- HIPAA: Health Insurance Portability and Accountability Act
- FERPA: Family Educational Rights and Privacy Act
- IND-CPA: Indistinguishability under chosen plaintext attack
References
Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
ESP: A Language for Programmable Devices
Sanjeev Kumar, Yitzhak Mandelbaum, Xiang Yu, and Kai Li
Princeton University
{skumar,yitzhakm,xyu,li}@cs.princeton.edu
ABSTRACT
This paper presents the design and implementation of Event-driven State-machines Programming (ESP)—a language for programmable devices. In traditional languages, like C, using event-driven state-machines forces a tradeoff that requires giving up ease of development and reliability to achieve high performance. ESP is designed to provide all of these three properties simultaneously.
ESP provides a comprehensive set of features to support development of compact and modular programs. The ESP compiler compiles the programs into two targets—a C file that can be used to generate efficient firmware for the device, and a specification that can be used by a verifier like SPIN to extensively test the firmware.
As a case study, we reimplemented VMMC firmware that runs on Myrinet network interface cards using ESP. We found that ESP simplifies the task of programming with event-driven state machines. It required an order of magnitude fewer lines of code than the previous implementation. We also found that model-checking verifiers like SPIN can be used to effectively debug the firmware. Finally, our measurements indicate that the performance overhead of using ESP is relatively small.
1. INTRODUCTION
Concurrency is a convenient way of structuring firmware for programmable devices. These devices tend to have limited CPU and memory resources and have to deliver high performance. For these systems, the low overhead of event-driven state machines often makes them the only choice for expressing concurrency. Their low overhead is achieved by supporting only the bare minimum functionality needed to write these programs. However, this makes an already difficult task of writing reliable concurrent programs even more challenging. The result is hard-to-read code with hard-to-find bugs resulting from race conditions.
The VMMC firmware [10] for Myrinet network interface cards was implemented using event-driven state machines in C. Our experience was that while good performance could be achieved with this approach, the source code was hard to maintain and debug. The implementation involved around 15600 lines of C code. Even after several years of debugging, race conditions cause the system to crash occasionally.
ESP was designed to meet the following goals. First, the language should provide constructs to write concise modular programs. Second, the language should permit the use of software verification tools like SPIN [14] so that the concurrent programs can be tested thoroughly. Finally, the language should permit aggressive compile time optimizations to provide low overhead.
ESP has a number of language features that allow development of fast and reliable concurrent programs. Concurrent programs are expressed concisely using processes and channels. In addition, pattern matching on channels allows an object to be dispatched transparently to multiple processes. A flexible external interface allows ESP code to interact seamlessly with C code. Finally, a novel memory management scheme allows an efficient and verifiably safe management of dynamic data.
We reimplemented the VMMC firmware on Myrinet network interface cards using ESP. We found that the firmware can be programmed with significantly fewer lines of code. In addition, since the C code is used only to perform simple operations, all the complexity is localized to a small portion of the code (about 300 lines in our implementation). This is a significant improvement over the earlier implementation where the complex interactions were scattered over the entire C code (15600 lines).
The SPIN verifier was used to develop and extensively test the VMMC firmware. Since developing code on the network card is often slow and painstaking, parts of the system were developed and debugged entirely using the SPIN simulator. SPIN was also used to exhaustively verify the memory safety of the firmware. Once the properties to be checked by the verifier are specified, they can be rechecked with little effort as the system evolves.
The ESP compiler generates efficient firmware. We used microbenchmarks to measure the worst-case performance overhead in the firmware. Based on earlier application studies [17, 5], we expect the impact of the extra performance overhead to be relatively small.
The rest of the paper is organized as follows. Section 2 presents the motivation for a new language. Section 3 describes our three goals and our approach. The next three sections (Sections 4, 5 & 6) describe how ESP meets each
of the three goals. Section 4 provides the design of the ESP language. Section 5 describes how the SPIN model-checking verifier can be used to develop and test ESP programs. Section 6 shows how the ESP compiler generates efficient code and presents some performance measurements. Section 7 describes the related work. Finally, Section 8 presents our conclusions.
2. MOTIVATION
Devices like network cards and hard disks often include a programmable processor and memory (Figure 1). This allows the devices to provide sophisticated features in firmware. For instance, a disk can support aggressive disk-head scheduling algorithms in firmware.

**Figure 1: Programmable Devices**
The firmware for programmable devices is often programmed using concurrency. Concurrent programs have multiple threads of control that coordinate with each other to perform a single task. The multiple threads of control provide a convenient way of keeping track of multiple contexts in the firmware. In these situations, concurrency is a way of structuring a program that runs on a single processor.
Concurrent programs can be written using a variety of constructs like user-level threads or event-driven state machines. They differ in the amount of functionality provided and the overhead involved. However, programmable devices tend to have fairly limited CPU and memory resources. Since these systems need to deliver high performance, the low overhead of event-driven state machines makes them the only choice.
In this paper, we describe a language called ESP that can be used to implement firmware for programmable devices. We were motivated by our experience with implementing the VMMC firmware. We use VMMC firmware as a case study to evaluate the ESP language. In this section, we start with a description of VMMC. Then we examine the problems with event-driven state machines programming in traditional languages like C and motivate the need for a new language for writing firmware for programmable devices.
2.1 Case Study: VMMC Firmware
The VMMC architecture delivers high-performance on Gigabit networks by using sophisticated network cards. It allows data to be directly sent to and from the application memory (thereby avoiding memory copies) without involving the operating system (thereby avoiding system call overhead). The operating system is usually involved only during connection setup and disconnect.
The current VMMC implementation uses the Myrinet Network Interface Cards. The card has a programmable 33-MHz LANai4.1 processor, 1-Mbyte SRAM memory and 3 DMAs to transfer data—to and from the host memory; to send data out onto the network; and to receive data from the network. The card has a number of control registers including a status register that needs to be polled to check for data arrival, watchdog timers and DMA status.

**Figure 2: VMMC Software Architecture.** The shaded regions are the VMMC components.
The VMMC software (Figure 2) has 3 components: a library that links to the application; a device driver that is used mainly during connection setup and disconnect; and firmware that runs on the network card. Most of the software complexity is concentrated in the firmware code which was implemented using event-driven state machines in C. The entire system was developed over several years and most of the bugs encountered were located in the firmware. Our goal is to replace the firmware using the ESP language.
2.2 Implementing Firmware in C
Event-driven state machines provide the bare minimum functionality necessary to write concurrent programs—the ability to block in a particular state and to be woken up when a particular event occurs. This makes them fairly difficult to program with. We illustrate event-driven state machines programming in C with an example. The C code fragment is presented in Appendix A and is illustrated in Figure 3.
A program consists of multiple state-machines. For each state in a state machine, a handler is provided for every event that is expected while in that state. When an event occurs, the corresponding handler is invoked. The handler processes the event, transitions to a different state and blocks by returning from the handler. All the state machines share a single stack.
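As a concrete illustration, here is a minimal, hypothetical C sketch of this handler-table style; the state names, events, and handlers are invented for the example and are not taken from the VMMC code:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch of handler-table state machines in C.
 * States, events, and handlers are invented for illustration. */

enum state { IDLE, WAITING, DONE, NUM_STATES };
enum event { EV_REQUEST, EV_REPLY, NUM_EVENTS };

static enum state cur = IDLE;  /* machine state lives in a global      */
static int saved_data = 0;     /* data needed by a later handler must
                                  be saved in globals before blocking  */

static void on_request(void) { saved_data = 42; cur = WAITING; }
static void on_reply(void)   { cur = DONE; }

/* one handler per (state, event) pair; NULL marks unexpected events */
typedef void (*handler_t)(void);
static handler_t table[NUM_STATES][NUM_EVENTS] = {
    [IDLE]    = { [EV_REQUEST] = on_request },
    [WAITING] = { [EV_REPLY]   = on_reply   },
};

void dispatch(enum event ev) {
    handler_t h = table[cur][ev];
    if (h) h();  /* the handler transitions and then "blocks"
                    simply by returning to the dispatcher      */
}
```

Each handler runs to completion on the shared stack, so any value that must survive until the next event has to live in the globals, which is precisely the source of the fragmentation problems described here.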
There are several problems with this approach. First, the code becomes very hard to read because the code gets fragmented across several handlers.
Second, since the stack is shared, all the values that are needed later have to be saved explicitly in global variables before a handler blocks. So data is passed between handlers through global variables (e.g. pAddr, sendData). In addition, global variables are also used by state machines to communicate with each other (e.g. reqSM2). Such implicit communication through global variables is very error-prone.
Figure 3: Programming in C. The code is presented in Appendix A. A state machine is specified using a set of handlers. For each state in a state machine, a list of (event, handler) pairs has to be provided. When an event occurs, the corresponding handler is invoked. A handler is a C function that takes no arguments and returns void.
Finally, hand-optimized fast paths are often built into the system to speed up certain requests. These fast paths rely on global information like the state of the various state machines and their data structures and violate every abstraction boundary. For instance, in VMMC firmware, a particular fast path is taken if the network DMA is free and no other request is currently being processed (this requires looking at the state of multiple DMAs). In addition, the fast path code updates global variables used for retransmission and might have to update the state of several state machines. These fast paths complicate the already complex state-machine code even further.
ESP aims to address these problems without incurring too much performance penalty. As we shall see, the ESP code corresponding to the C code (Figure 3) can be written much more succinctly and readably (Appendix B).
3. GOALS AND APPROACH
ESP is a language designed to support event-driven State-machines programming. It has the following goals:
Ease of development To aid programming, the language should permit the concurrency to be expressed simply. It should also provide support for modularity, dynamic memory management and a flexible interface to C.
Permit extensive testing Concurrent programs often suffer from hard-to-find race conditions and deadlock. ESP should support the use of software verifiers so that the programs can be tested extensively. Currently, ESP uses the SPIN verifier. SPIN [14] is a flexible and powerful verification system designed to verify correctness of software systems. It uses model-checking to explore the state-space of the system.
Low performance penalty These concurrent programs are designed to be run on a single processor. To have low performance overhead, concurrent programs in ESP should permit aggressive compile time optimizations.
In traditional languages, like C, using event-driven state-machines forces a tradeoff that requires giving up ease of development and reliability to achieve high performance. ESP is designed to provide all of these three properties simultaneously.
To meet these design goals, the ESP language is designed so that it can not only be used to generate an executable but also be translated into specification that can be used by the SPIN verifier (Figure 4). The ESP compiler takes an ESP program (pgm.ESP) and generates 2 files. The generated C file (pgm.C) can then be compiled together with the C code provided by the user (help.C) to generate the executable. The programmer-supplied C code implements simple device-specific functionality like accessing device registers. The SPIN file (pgm.SPIN) generated by the ESP compiler can be used together with programmer-supplied SPIN code (test.SPIN) to verify different properties of the system. The programmer-supplied SPIN code generates external events such as network message arrival as well as specifies the properties to be verified. Different properties of the system can be verified by using pgm.SPIN together with different test.SPIN files.
4. EVENT-DRIVEN STATE-MACHINES PROGRAMMING (ESP) LANGUAGE
ESP is based on the CSP [13] language and has a C-style syntax. ESP supports Event-driven State-machines Programming. The basic components of the language are processes and channels. Each process represents a sequential flow of control in a concurrent program. Processes communicate with each other by sending messages on channels. All the processes and channels are static and known at compile time.
Appendix B presents the implementation of the example (Section 2.2) in ESP. In this section, we will use fragments from that code to illustrate the various language features.
4.1 Types, Expressions and Statements
ESP supports basic types like int and bool as well as mutable and immutable versions of complex datatypes like record, union and array. Types can be declared as follows:
```plaintext
type sendT = record of { dest: int, vaddr: int, size: int}
type updateT = record of { vaddr: int, paddr: int}
type userT = union of { send: sendT, update: updateT, ...}
```
ESP does not provide any global variables. All variables have to be initialized at declaration time (New variable declaration is indicated with a $ prefix). Types do not have to be specified when they can be deduced (ESP does a simple type inferencing on a per statement basis). For instance:
```plaintext
$i: int = 7;  // Declare Variable
i = 45;       // Update Variable
$j = 36;      // Type inferred
```
ESP provides the common imperative constructs like if-then-else statements and while loops. However, it does not provide recursive data types or functions. Recursive data types are not supported because they cannot be translated easily into the specification language of the SPIN verifier. Functions are not supported because processes provide a more appropriate abstraction mechanism in a concurrent setting (Section 4.3).
4.2 Channels
Communication over channels are synchronous—a sender has to be attempting a send (using the out construct) concurrently with a receiver attempting to receive (using the in construct) on a channel before the message can be successfully transferred over the channel. Consequently, both in and out are blocking operations. Using synchronous channels has several benefits. First, they simplify reasoning about message ordering, especially when processes can have complex interactions. Second, they can be implemented more efficiently than buffered channels. When buffering is required, it can be implemented explicitly by the programmer. Finally, buffered channels increase the size of state-space that has to be explored during verification.
The alt construct allows a process to wait on the in/out readiness of multiple channels. However, for each execution of an alt statement, only the actions associated with a single channel are performed. In the case where multiple channels are ready, a single channel is selected. The channel selection algorithm need not be fair (it may favor performance critical channels), but must prevent starvation [20]. The following is a code fragment from a process that implements a FIFO queue. The macros FULL, EMPTY and INCR have the expected functionality. The first alternative accepts new messages and inserts them at the tail of the queue. The second alternative sends the message at the head of the queue and then removes it from the queue. Note that the first alternative is disabled when the buffer is full and second is disabled when the buffer is empty.
```plaintext
while {
  alt {
    case( !FULL, in( chan1, Q[tl])) { INCR(tl); }
    case( !EMPTY, out( chan2, Q[hd])) { INCR(hd); }
  }
}
```
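For comparison, the same queue discipline can be written in plain C as a ring buffer. This is a hypothetical sketch (the names are invented here), with each alternative mapped to a function that is "disabled" by returning failure:

```c
#include <assert.h>

/* Hypothetical C analogue of the ESP FIFO process above: a ring
 * buffer where the insert alternative is disabled when full and
 * the remove alternative is disabled when empty. */

#define QSIZE 4
static int Q[QSIZE];
static int hd = 0, tl = 0, count = 0;

static int full(void)  { return count == QSIZE; }
static int empty(void) { return count == 0; }

/* corresponds to: case( !FULL, in( chan1, Q[tl])) { INCR(tl); } */
int enqueue(int v) {
    if (full()) return 0;      /* alternative disabled */
    Q[tl] = v;
    tl = (tl + 1) % QSIZE;     /* INCR(tl) */
    count++;
    return 1;
}

/* corresponds to: case( !EMPTY, out( chan2, Q[hd])) { INCR(hd); } */
int dequeue(int *out) {
    if (empty()) return 0;     /* alternative disabled */
    *out = Q[hd];
    hd = (hd + 1) % QSIZE;     /* INCR(hd) */
    count--;
    return 1;
}
```

What the C version cannot express is the blocking: ESP's alt suspends the process until some enabled channel becomes ready, whereas the caller of these functions must poll or arrange a wakeup itself.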
One of the features of the language is the use of pattern matching to support dispatch. Pattern-matching is used in languages like ML to provide more expressive switch statements. ESP uses it to support dispatch. Patterns have the same syntax as the one used for allocating unions and records. They can be differentiated based on their position in a statement. They are considered a pattern when they occur in an lvalue position and cause allocation when they occur in a rvalue position.
```plaintext
$sr: sendT = { 5, 10000, 512};              // Allocate a record
$ur1: userT = { send -> sr};                // Union pointing to sr
$ur2: userT = { send -> { 5, 10000, 512}};  // Union with a new record
{ send -> { $dest, $vAddr, $size}} = ur2;   // Pattern matching
```
In the above code, the first line initializes sr to a newly allocated record. The second line initializes ur1 to a newly allocated union with a valid send field that points to the record in sr. The third line initializes ur2 to a newly allocated union with a valid send field that points to a newly allocated record. The fourth line has a pattern on the left-hand side, and pattern matching causes the variables dest, vAddr and size to be initialized to 5, 10000 and 512, respectively.
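In C, the closest analogue to userT and its patterns is a tagged union. The following hypothetical sketch (names invented for the example) shows how the dispatch that ESP performs implicitly through patterns would have to be written by hand:

```c
#include <assert.h>

/* Hypothetical C rendering of the userT union above: an explicit
 * tag replaces ESP's pattern matching, and dispatch on the tag
 * becomes a manual check. */

typedef struct { int dest, vaddr, size; } send_t;
typedef struct { int vaddr, paddr; }     update_t;

typedef struct {
    enum { SEND, UPDATE } tag;
    union { send_t send; update_t update; } u;
} user_t;

/* hand-written equivalent of the ESP pattern
 *   { send -> { $dest, $vAddr, $size}} = ur2;
 * returns 1 on a match, 0 otherwise */
int match_send(const user_t *r, int *dest, int *vaddr, int *size) {
    if (r->tag != SEND) return 0;
    *dest  = r->u.send.dest;
    *vaddr = r->u.send.vaddr;
    *size  = r->u.send.size;
    return 1;
}
```

In ESP the failing case never arises at runtime: the compiler checks that the patterns on a channel are disjoint and exhaustive, so every object matches exactly one reader.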
Patterns can be specified in an in operation. For example, suppose process A performs

```plaintext
in( userReqC, { send -> { dest, vAddr, size}});
```

to accept only send requests, while a process B performs

```plaintext
in( userReqC, { update -> { vAddr, pAddr}});
```

to accept only update requests. When process C performs

```plaintext
out( userReqC, req);
```

the object will be delivered to process A or B depending on which pattern it matches. This frees process C from having to identify the appropriate process and send the message to it. To support this functionality efficiently, ESP requires that all the patterns used on a channel be disjoint and exhaustive: an object has to match exactly one pattern. In addition, each pattern can be used by one process only. So, although a channel can have multiple readers and writers, a channel together with a pattern defines a port, which can have multiple writers but only a single reader.
Objects sent over channels are passed by value. Since there are no global variables, this ensures that processes can communicate only by sending messages over channels. To support this efficiently, ESP allows only immutable objects to be sent over channels. This applies not only to the object specified in the out operation but also to all objects recursively pointed to by that object.
A cast operation allows casting an immutable object into a mutable object and vice versa. Semantically, the cast operation causes a new object to be allocated and the corresponding values to be copied into the new object. However, the compiler can avoid creating a new object in a number of cases. For instance, if the compiler can determine that the object being cast is no longer used afterwards, it can reuse that object and avoid the allocation.
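A hypothetical C sketch of the cast semantics (the record type and names are invented here): for a flat record, the deep copy reduces to a struct assignment, and a compiler is free to elide the copy when the source is dead afterwards:

```c
#include <assert.h>

/* Hypothetical sketch of ESP's cast semantics in C: casting between
 * the immutable and mutable views of a flat record is semantically
 * an allocation plus copy, here a plain struct assignment. */

typedef struct { int vaddr, paddr; } record_t;

record_t cast_to_mutable(const record_t *immutable) {
    record_t copy = *immutable;  /* semantically: allocate and copy  */
    return copy;                 /* a compiler may elide the copy if
                                    the source is dead afterwards    */
}
```

The value semantics are what matter: mutating the result must not be visible through the original, which is what makes it safe to send immutable objects between processes without synchronization.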
### 4.3 Processes
Processes in ESP implement state machines—each location in the process where it can block implicitly represents a state in the state machine.
```plaintext
process add5 {
  in( chan1, $i);
  out( chan2, i+5);
}
```
The above process represents a state machine with 2 states. The first state is when it is blocked waiting on an in operation on channel chan1 and the second when it is blocked on an out operation on channel chan2.
Processes in ESP are lightweight in that they do not need a stack to run. This is because ESP does not support functions, allowing the local variables of a process to be allocated in the static region. Thus a context switch only requires saving the current location in one process and jumping to the saved location in another.
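The "saved location" idea can be mimicked in C with switch-based stackless coroutines, in the style of protothreads. This hypothetical sketch stores only a resume point and one integer per process; the names and macros are invented for the example:

```c
#include <assert.h>

/* Hypothetical sketch of a stackless process: the only per-process
 * state saved across a "block" is the resume location (a line number
 * stored in pc), as in protothread-style coroutines. */

typedef struct { int pc; int i; } proc_t;

#define BEGIN(p)  switch ((p)->pc) { case 0:
#define BLOCK(p)  do { (p)->pc = __LINE__; return 0; case __LINE__:; } while (0)
#define END(p)    } (p)->pc = 0; return 1

/* add5: "block" once waiting for input, then produce i+5;
 * returns 0 while blocked, 1 when the process body completes */
int add5_step(proc_t *p, int in_val, int *out_val) {
    BEGIN(p);
    p->i = in_val;        /* state 1: in( chan1, $i)   */
    BLOCK(p);             /* context switch point      */
    *out_val = p->i + 5;  /* state 2: out( chan2, i+5) */
    END(p);
}
```

A "context switch" here is just storing one integer and returning, which is why such processes need no stack of their own; the price, as in ESP, is that no blocking may happen inside a nested function call.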
In ESP, the processes are used to support abstraction—functions are not supported. For example, consider the following code fragment from a process which implements a page table which maps virtual addresses into physical addresses (Appendix B). The mapping is maintained in the array table. When it receives a request to translate virtual address to physical address, it uses the virtual address to lookup the mapping and sends a reply back to the requesting process. The ret specifies the process making the request so that the reply can be directed back to that process. The second case accepts requests to update the mapping and updates the table.
```plaintext
alt {
  case( in( ptReqC, { ret, vAddr})) {
    // Request to lookup a mapping
    out( ptReplyC, { ret, table(vAddr)});
  }
  case( in( userReqC, { update -> { vAddr, pAddr}})) {
    // Request to update a mapping
    table(vAddr) = pAddr;
  }
}
```
To mimic the behavior of a function call that expects a return value, a pair of out and in operations is used. For instance:

```plaintext
out( ptReqC, { 0, vAddr});
in( ptReplyC, { 0, pAddr});
```

Here, 0 is a constant that is different for each process (a process id), used to route the reply back to the requester. On the other hand, functions that do not expect a return value can be modeled using a single out operation:

```plaintext
out( userReqC, { update -> { vAddr, pAddr}});
```
ESP processes are a more appropriate abstraction mechanism than functions in a concurrent setting because an ESP process can block on an event, whereas a function cannot block without a stack (Section 2.2). In addition, the process abstraction allows flexibility in scheduling computation. For instance, if no return value is expected (see the last example), the code to update the table can be delayed until later.
### 4.4 Memory Management
Memory allocation bugs are often the hardest to find, especially in the context of concurrent programming. However, supporting automatic memory management usually involves too much overhead (both in space and time). On the other hand, explicit memory management with malloc and free is hard to program correctly.
ESP provides a novel explicit management scheme to allow efficient but bug-free memory management. The key observation is that memory bugs are hard to find because memory safety is, usually, a global property of a concurrent program—memory safety cannot be inferred by looking only at a part of the program. To rectify this, ESP is designed to make memory safety a local property of each process.
When objects are sent over channels, deep copies of the objects are semantically delivered to the receiving process; the implementation never has to actually copy the object. Hence, there is no overlap between the objects accessible to different processes. Therefore, each process is responsible for managing its own objects. Bugs in the other processes do not affect it.
ESP provides a reference counting interface to manage memory. At allocation time, the reference count is set to 1. ESP also provides 2 primitives (link and unlink) to manipulate the reference counts. The link primitive increases the reference count of the object while unlink decreases it. If this causes the reference count of an object to become 0, it frees the object and recursively invokes unlink on the objects pointed to by it.
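A hypothetical C sketch of this link/unlink discipline for a record with a single child pointer (the names are invented for the example):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical sketch of link/unlink-style reference counting for
 * a node with one child pointer; unlink recursively releases the
 * child when a count reaches zero. */

typedef struct node {
    int refcount;
    struct node *child;
} node_t;

node_t *alloc_node(node_t *child) {
    node_t *n = malloc(sizeof *n);
    n->refcount = 1;             /* allocation sets the count to 1 */
    n->child = child;            /* takes over one child reference */
    return n;
}

void link_node(node_t *n) { n->refcount++; }

int unlink_node(node_t *n) {     /* returns 1 if n was freed */
    if (--n->refcount > 0) return 0;
    if (n->child) unlink_node(n->child);
    free(n);
    return 1;
}
```

Misplacing a single link or unlink call is exactly the kind of local error that the per-process safety argument below lets the verifier catch.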
ESP is designed so that link and unlink are the only source of unsafety in the language. However, since the unsafety is local to each process, the SPIN verifier can be used to verify the safety of each process separately. This makes it less vulnerable to state explosion in the verifier. In fact, the SPIN verifier was able to verify the safety of all processes used to implement the VMMC firmware fairly easily (Section 5.3).
4.5 External Interface
The firmware implementation has to deal with special registers, volatile memory and layout of packets sent/received on the network. ESP addresses this by providing an external interface to interact with C code.
In addition, the specification derived from the ESP code has to interact with some programmer provided SPIN code during verification (Figure 4).
ESP provides a single external interface for both SPIN and C code. It uses the channel mechanism to support external interfaces. This is different from the traditional approaches of either allowing C code to be directly embedded in the program [6, 2] or allowing functions that are implemented externally to be called [3, 8].
Using channels to provide external interfaces has a number of advantages. First, ESP processes often block on external events like arrival of user request or network packets. Using channels allows a process to use the existing constructs to block on external events. Second, external code can also use the same dispatch mechanism built into channels through pattern-matching. Finally, it promotes modularity. For instance, if retransmission is no longer required, the retransmission processes can be dropped and the channels used to interact can be converted into external channels. Other processes that were using these channels are not affected because they cannot tell the difference between an external channel and a regular channel.
A channel can be declared to have an external reader or writer but not both. For example:
```
channel userReqC: userT // External C writer
interface userReq {
    Send( { Send | { $dest, $vAddr, $size} }),
    Update( { Update | $new }),
    ...
}
```
defines a channel with an external writer. The $ prefix in the pattern indicates a parameter to be passed to the C function.
⁵The inability of reference counting to deal with cycles poses no problems to ESP because it does not have circular data structures.
⁶Objects received over channels are treated as newly allocated objects.
Interface to C. To support a synchronous C interface, ESP requires two types of functions to be provided. The first type has an "IsReady" suffix and returns whether the channel has data to send/receive. The second type is called after the first has indicated that it is ready to communicate. So, in the previous example, the following functions have to be provided by the programmer.
```
int UserReqIsReady( void);
void UserReqSend( int *dest, int *vAddr, int *size);
void UserReqUpdate( int **new);
...
```
UserReqIsReady should return 0 when it has nothing to send. When it has something to send, it returns an integer that specifies which one of the patterns is ready. A separate function has to be provided for each of the patterns specified. The use of patterns in this context serves 2 purposes. First, it supports dispatch on external channels. Second, it minimizes the amount of allocation and manipulation of ESP data structures that has to be done in C. For instance, by specifying the entire pattern in UserReqSend, there is no need for that function to allocate any ESP data structure.
UserReqUpdate, on the other hand, will have to allocate, correctly initialize and return an ESP record. This can not only introduce allocation bugs in the system but also move the allocation beyond the reach of the ESP compiler, thereby, preventing the allocation from being optimized away.
External in channels differ from external out channels in 2 ways. First, the IsReady function just returns whether or not the channel is willing to accept data; any writer on that channel can then write to it. Second, the functions do not need to pass pointers since the parameters will not be modified, so all the parameters have one less level of indirection.
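The dispatch on an external out channel described above can be sketched as a poll loop. All names follow the hypothetical UserReq example; the `pending` flag stands in for real device state, and the pattern functions are stubs.

```c
#include <assert.h>

/* Sketch of dispatch on an external C channel: IsReady returns 0 when
   nothing is pending, or the 1-based index of the pattern that is
   ready; the poll loop then calls the matching per-pattern function.
   All names here are illustrative. */

enum { NOTHING = 0, PAT_SEND = 1, PAT_UPDATE = 2 };

static int pending = PAT_SEND;     /* simulated external state */
static int sends = 0, updates = 0;

int UserReqIsReady(void) { return pending; }

void UserReqSend(int *dest, int *vAddr, int *size) {
    *dest = 1; *vAddr = 0x1000; *size = 64;  /* fill in request fields */
    sends++;
    pending = NOTHING;
}

void UserReqUpdate(int **new) { (void)new; updates++; pending = NOTHING; }

/* One iteration of the idle loop's poll of this channel. */
void poll_user_req(void) {
    int dest, vAddr, size;
    int *newmap = 0;
    switch (UserReqIsReady()) {
    case PAT_SEND:   UserReqSend(&dest, &vAddr, &size); break;
    case PAT_UPDATE: UserReqUpdate(&newmap);            break;
    default:         break;                  /* nothing to do */
    }
}
```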
SPIN Interface. Since SPIN has support for channels, external SPIN code can interact directly with SPIN by reading and writing to the appropriate channels.
4.6 Case Study: VMMC Firmware
We have reimplemented the VMMC firmware using ESP. The implementation supports most of the VMMC functionality (only the redirection feature is currently not supported). The earlier implementation included about 15600 lines of C code (around 1100 of these lines were used to implement the fast paths).⁷
The new implementation using ESP uses 500 lines of ESP code (200 lines of declarations + 300 lines of process code) together with around 3000 lines of C code.⁸ The C code is used to implement simple tasks like initialization, initiating DMA, packet marshalling and unmarshalling, and sharing data structures with code running on the host processor (in the library and the driver). All the complex state-machine interactions are restricted to the ESP code, which uses 7 processes and 17 channels. This is a significant improvement over the earlier implementation, where the complex interactions were spread throughout the 15600 lines of hard-to-read code.
⁷To make a fair comparison, we counted only those lines of the earlier implementation that correspond to functionality implemented in the new VMMC implementation using ESP.
⁸ESP currently does not provide any support for fast paths.
5. DEVELOPING AND TESTING USING A VERIFIER
We have a working prototype of the ESP compiler. It generates both C code that can be compiled into firmware as well as a specification that can be used by the SPIN verifier (Figure 4). In this section, we start with a description of the SPIN model checker. We then describe how ESP code is translated into SPIN specification. Finally, we present our experience with using the SPIN model checker to develop and extensively test the VMMC firmware.
5.1 SPIN Model Checking Verifier
Model checking is a technique for verifying a system composed of concurrent finite-state machines. Given a concurrent finite-state system, a model checker explores all possible interleaved executions of the state machines and checks if the property being verified holds. A global state in the system is a snapshot of the entire system at a particular point of execution. The state space of the system is the set of all the global states reachable from the initial global state. Since the state space of such systems is finite, the model checkers can, in principle, exhaustively explore the entire state space.
The advantage of using model checking is that it is automatic. Given a specification for the system and the property to be verified, model checkers automatically explore the state space. If a violation of the property is discovered, it can produce an execution sequence that causes the violation and thereby helps in finding the bug.
The disadvantage is that the state space to be explored is exponential in the number of processes and the amount of memory used (for variables and data structures). So, the resources required (CPU as well as memory resources) by the model checker to explore the entire state space can quickly grow beyond the capacity of modern machines.
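At its core, exhaustive state-space exploration is graph reachability over the global states. The following toy sketch in C assumes a hand-coded successor function over a single integer state; a real model checker explores interleavings of many processes, but the search structure is the same.

```c
#include <assert.h>

/* Minimal sketch of exhaustive state-space exploration: breadth-first
   search over a small hand-coded transition relation, checking a
   property (here: the state stays in [0, 8)) in every reachable
   state. The transition relation is invented for illustration. */

#define MAX_STATES 64

/* successor function: each state has 2 next states (two "processes") */
static int succ(int s, int i) {
    return i ? (s * 2) % 8 : (s + 1) % 8;
}

/* returns 1 if the property holds in all reachable states, 0 otherwise */
int check_all_reachable(int init) {
    int seen[MAX_STATES] = {0}, queue[MAX_STATES], head = 0, tail = 0;
    queue[tail++] = init;
    seen[init] = 1;
    while (head < tail) {
        int s = queue[head++];
        if (s < 0 || s >= 8) return 0;       /* property violated */
        for (int i = 0; i < 2; i++) {
            int n = succ(s, i);
            if (!seen[n]) { seen[n] = 1; queue[tail++] = n; }
        }
    }
    return 1;                                /* entire state space explored */
}
```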
SPIN [14] is a flexible and powerful model checker designed for software systems. SPIN supports high-level features like processes, rendezvous, channels, arrays, and records. Most other verifiers target hardware systems and provide a fairly different specification language. Although ESP can be translated into these languages, additional state would have to be introduced to implement features like the rendezvous channels using primitives provided in that specification language. This would make the state explosion problem worse. In addition, the semantic information lost during translation would make it harder for the verifiers to optimize the state-space search.
SPIN supports checking for deadlocks and verifying simple properties specified using assertions. More complex properties, like absence of starvation, can be specified using Linear Temporal Logic (LTL).
SPIN is an on-the-fly model checker: it does not build the global state machine before it can start checking the property to be verified. So, in cases where the state space is too big to be explored completely, it can do partial searches. It provides 3 different modes for state-space exploration. The entire state space is explored in the exhaustive mode. For larger state spaces, the bit-state hashing mode performs a partial search using significantly less memory. The simulation mode explores a single execution sequence in the state space, making a random choice between the possible next states at each stage. Since it does not keep track of the states already visited, it could explore some states multiple times while never exploring others. However, the simulation mode in SPIN usually discovers most bugs in the system. Most simulators are designed to accurately mimic the system being simulated, so hard-to-find bugs that occur infrequently on the real system also occur infrequently on the simulators. The SPIN simulator is different in that it makes a random choice at each stage and is, therefore, more effective in discovering bugs.
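The simulation mode amounts to a random walk over the same transition relation. A toy sketch follows; the `succ` function and the "bad" state 13 are invented for illustration, and repeated runs with different seeds would stand in for repeated simulations.

```c
#include <assert.h>
#include <stdlib.h>

/* Sketch of SPIN-style simulation mode: instead of exploring all
   interleavings, take one random execution sequence, choosing
   uniformly among enabled successors at each step. Repeated random
   runs find most bugs cheaply, with no coverage guarantee. */

static int succ(int s, int i) { return i ? (s * 3) % 16 : (s + 1) % 16; }

/* run one random trace of `steps` steps; return 1 if the checked
   property (state != 13, a hypothetical "bad" state) held throughout */
int simulate(int init, int steps, unsigned seed) {
    srand(seed);
    int s = init;
    while (steps-- > 0) {
        if (s == 13) return 0;         /* bug found on this trace */
        s = succ(s, rand() % 2);       /* random choice of next state */
    }
    return 1;                          /* no violation on this trace */
}
```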
5.2 Translating ESP into SPIN Specifications
The ESP code can be translated into the SPIN specification at various stages of the compilation process. The ESP compiler does this very early (right after type checking) for several reasons. First, the SPIN specification language does not support pointers, so the translation is much more difficult at a later stage because it would require the compiler to carry some of the type information through the transformations on the intermediate representations. Second, the addition of temporary variables during the compilation increases the size of the state space that must be explored. The one disadvantage is that any bugs introduced by the compiler cannot be caught by the verifier.
The ESP compiler generates SPIN specification that can instantiate multiple copies of the ESP program. This is achieved easily in SPIN by using an array of every data structure. Then each instance can access its data by using its instantiation id. The ability to run multiple copies of an ESP program under SPIN allows one to mimic a setup where the firmware on multiple machines are communicating with each other.
The translation into SPIN specification is fairly straightforward with a few exceptions. These stem from the lack of pointers and dynamic allocations. While ESP allows the size of the arrays to be determined at run time, SPIN requires it to be specified at compile time. This problem is addressed by using arrays of a fixed maximum size. This size can be specified per type.
Another problem arising from the lack of pointers in SPIN is dealing with mutable data types. For instance,
```
$a1: #array of int = *[8 -> 0, ...]; // Allocate
$a2 = $a1;
$a2[3] = 7; // Update
```
Here, an update through $a2 has to be visible through $a1. Since SPIN does not support pointers, separate memory is allocated for $a1 and $a2, and an assignment copies the entire structure. This causes a problem with mutable data structures because an update to one structure ($a2) has to be visible in the other ($a1). We address this by assigning an objectid to every object at allocation time. When objects get copied, the objectid gets copied along. Later, when a structure is updated, we update all structures with the same objectid. Although this may appear very inefficient, it does not increase the state space that has to be explored and, therefore, does not significantly impact the verifiability of the system.
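The objectid scheme can be sketched as follows. This is a simplified C model of what the generated SPIN code does, not the generator's actual output; the fixed `copies` array and the `NCOPIES` bound are illustrative.

```c
#include <assert.h>

/* Sketch of the objectid scheme used in the generated SPIN model:
   SPIN has no pointers, so assignment copies whole structures. Each
   copy carries the objectid assigned at allocation, and a mutation is
   applied to every copy with the same objectid, preserving the
   shared-update semantics of mutable ESP arrays. */

#define NCOPIES 4
#define LEN 8

typedef struct { int objectid; int elem[LEN]; } ArrayCopy;

static ArrayCopy copies[NCOPIES];      /* zero-initialized */

void assign(int dst, int src) {        /* SPIN-style structure copy */
    copies[dst] = copies[src];         /* objectid is copied along */
}

void update(int which, int idx, int val) {
    int id = copies[which].objectid;
    for (int i = 0; i < NCOPIES; i++)  /* write through to all copies */
        if (copies[i].objectid == id)
            copies[i].elem[idx] = val;
}
```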
Memory safety of each individual process can be verified independently using the verifier (Section 4.4). To verify memory safety, we maintain a table that maps the objectid of each object to its reference count. Before each object access, the compiler inserts an assertion to verify that the object is live. The objectid is reclaimed when the reference count drops to zero and the object is freed. One positive side-effect of having to use a fixed-size reference count table is that the verifier can often catch memory leaks: a memory leak can cause the system to run out of objectids during verification.
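A minimal sketch of this verification-time bookkeeping, assuming a tiny fixed-size table; `MAX_OBJECTIDS` and the function names are invented for illustration.

```c
#include <assert.h>

/* Sketch of verification-time memory-safety checks: a fixed-size
   table maps objectids to reference counts. The compiler inserts an
   is_live assertion before every object access; allocation failing
   because the table is full is how the verifier surfaces leaks. */

#define MAX_OBJECTIDS 4

static int refcount[MAX_OBJECTIDS];     /* 0 = slot free */

int alloc_id(void) {                    /* returns -1: out of objectids */
    for (int id = 0; id < MAX_OBJECTIDS; id++)
        if (refcount[id] == 0) { refcount[id] = 1; return id; }
    return -1;                          /* possible leak detected */
}

int is_live(int id) { return refcount[id] > 0; }

void unlink_id(int id) { if (refcount[id] > 0) refcount[id]--; }
```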
5.3 Case Study: VMMC Firmware
The motivation for using a verifier is to allow more extensive testing than achievable with conventional methods. In the earlier VMMC implementation, we encountered new bugs every time we tried a different class of applications or ran it on a bigger cluster. The state-space exploration performed by verifiers allows more extensive testing.
We used SPIN throughout the development process. Traditionally, model checking is used to find hard-to-find bugs in working systems. However, since developing firmware on the network interface card involves a slow and painstaking process, we used the SPIN simulator to implement and debug it. Once debugged, the firmware can be ported to the network interface card with little effort.
As explained earlier (Figure 4), the programmer has to supply some test code (test.SPIN) for each property to be checked. The code not only specifies the property to be verified but also simulates external events such as network message arrival. Each piece of test code is usually less than 100 lines. Once written, these can be made part of the testing suite and used to recheck the system whenever changes are made to it.
We have successfully used the SPIN verifier in a number of situations. They include:
Development of Retransmission Protocol. The retransmission protocol (a simple sliding window protocol with piggyback acknowledgement) was developed entirely using the SPIN simulator. The SPIN test code used was 65 lines. Once debugged, the retransmission protocol was compiled into the firmware. It ran successfully on the network card without encountering any new bugs. The retransmission protocol in the earlier implementation required about 10 days to get a working version. Since we developed our code using SPIN, it required 2 days.
Checking Memory Safety. Since memory safety is a local property of each process, each process can be checked separately for memory safety. To verify the memory safety of the biggest process in the firmware required 40 lines of test code. The entire state space was 2251 states and could be explored using the exhaustive search mode in the SPIN verifier. It took 0.5 seconds to complete and required 2.2 Mbytes of memory. It should be noted that an exhaustive search would not only catch all the memory safety bugs but also some memory leaks. The result is a safe system that does not incur the overhead of garbage collection.
The firmware had been debugged by the time our memory safety verifier was developed. So we ran the verifier on an earlier version of the system that had a bug. The bug was identified by the verifier. We also introduced a variety of memory allocation bugs that access data that was already freed or introduce memory leaks. The verifier was able to find the bug in every case.
State-space explosion prevented us from checking for system-wide properties like absence of deadlocks. We are currently working on extracting more abstract models so that the state-space search is more tractable; this has already allowed us to find several bugs in the firmware that can cause deadlocks [15].
6. GENERATING EFFICIENT FIRMWARE
As described earlier (Figure 4), the ESP compiler uses C as backend and generates C code that can be used to generate the firmware. In this section, we describe the ESP compiler and then compare the performance of the new VMMC implementation using ESP with the earlier implementation.
6.1 ESP Compiler
Processes. The ESP compiler requires the entire program for compilation. It does whole-program analysis and generates one big C function that implements the entire concurrent program. One approach is to treat each process as an automaton and to combine them to generate one large automaton [3, 18]. Although this approach provides zero-overhead context switching, it can result in exponential growth in code size [11]. The ESP compiler takes a simpler approach. It generates the code for the processes separately and context switches between them. Since these processes are essentially state machines, the stack does not have to be saved during a context switch—only the program counter needs to be saved and restored. This has a fairly low overhead and involves only a few instructions.
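Saving only the program counter can be done by compiling each process body into a switch on a saved pc, in the style of protothreads. The following is a minimal sketch; the `Proc` layout and the `BLOCKED`/`DONE` codes are illustrative, not the ESP compiler's actual output.

```c
#include <assert.h>

/* Sketch of stackless context switching: each process is compiled to
   a function whose body is a switch on a saved program counter.
   Blocking stores a resume label and returns; restarting jumps
   straight back to the saved point. Only the pc is saved; there is
   no stack to preserve. */

typedef struct { int pc; int count; } Proc;

enum { BLOCKED = 0, DONE = 1 };

int proc_step(Proc *p) {
    switch (p->pc) {
    case 0:
        p->count++;
        p->pc = 1; return BLOCKED;     /* block (e.g. at an in()), save pc */
    case 1:
        p->count++;
        p->pc = 2; return BLOCKED;     /* block at a second point */
    case 2:
        return DONE;
    }
    return DONE;
}
```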
The generated code has an idle loop that polls for messages on external channels. When a message is available, it checks to see if a process is waiting for that message. If there is, it restarts that process by jumping to the location where the process was blocked. The process then executes till it reaches a synchronization point. If one or more processes are blocked waiting to synchronize, it picks one randomly and completes the message transfer. At this point, both the synchronizing processes can continue executing. ESP currently uses a simple stack-based scheduling policy. This scheduling policy picks one of these two processes to continue execution and adds the other one to the ready queue (queue of processes that are waiting to execute). The processes are executed non-preemptively. When the running process eventually blocks, the next process in the ready queue is executed. This is repeated till there are no more processes to run and the program returns to the idle loop.
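The ready-queue discipline can be sketched as follows. This is a toy model: `run_until_block` stands in for executing a process to its next synchronization point, and the `steps_left` counters simulate when each process blocks for good.

```c
#include <assert.h>

/* Sketch of the non-preemptive scheduler: a ready queue of process
   ids; run the head until it blocks, requeue it if it is still
   runnable, and repeat until the queue drains (back to idle loop). */

#define NPROC 3
#define QCAP  16

static int steps_left[NPROC];          /* steps until each process stops */
static int runs[NPROC];

/* run process p until it blocks; return 1 if it wants to run again */
static int run_until_block(int p) {
    runs[p]++;
    return --steps_left[p] > 0;
}

int schedule(void) {                    /* returns number of dispatches */
    int q[QCAP], head = 0, tail = 0, dispatches = 0;
    for (int p = 0; p < NPROC; p++) q[tail++] = p;
    while (head < tail) {
        int p = q[head++];
        dispatches++;
        if (run_until_block(p) && tail < QCAP)
            q[tail++] = p;             /* still runnable: requeue */
    }
    return dispatches;                  /* queue empty: back to idle loop */
}
```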
The ESP compiler performs some of the traditional optimizations like copy propagation and dead code elimination on each process separately before combining them to generate the C code. Although, the C compiler also performs these optimizations, the semantic information lost when the processes are combined to generate the C code makes it hard for the C compiler to perform these optimizations effectively.
Channels. One way of implementing channels is to have a set of queues (one for each pattern used on the channel) that writers can wait on.⁹ This approach makes `alt` fairly expensive: before blocking on an `alt` statement, the process has to be added to multiple queues (one for each case in the `alt`). When it is later unblocked, it has to
⁹Although there can be multiple readers on a channel, there can be only one reader per pattern on a channel, so a queue is not needed for the readers.
be removed from all these queues (which can require looking through the queue since it might be in the middle of the queue).
The ESP compiler takes a different approach. It uses a bit-mask per process—one bit for every channel the process may block on. Blocking at an `alt` statement requires simply setting the right bit mask for the process, while unblocking requires zeroing out all the bits. This approach can have two problems. First, checking if a channel has a writer now requires checking the bit masks of multiple processes (as opposed to just checking the corresponding queue). However, since each process uses only a few bits (much fewer than 32), the bit masks for several processes can be collocated in a single integer at compile time. Collocating the right processes can reduce the number of different masks to be checked to 1 or 2. Second, we lose the FIFO ordering of the queues, and extra effort must be made to avoid introducing starvation. However, most of the time only one other process is waiting, so no extra overhead is incurred in the common case.
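A sketch of the bit-mask scheme for two collocated processes; the field widths, shifts, and function names are illustrative, not the compiler's actual layout.

```c
#include <assert.h>

/* Sketch of the bit-mask blocking scheme: each process owns a few
   bits in a shared word, one per channel it may block on. Blocking
   on an alt sets the bits for the channels in that alt; unblocking
   clears the process's whole field with one store. Checking for a
   blocked peer on a channel is a single AND per collocated mask. */

#define P0_SHIFT 0                      /* process 0 owns bits 0..3 */
#define P1_SHIFT 4                      /* process 1 owns bits 4..7 */
#define FIELD    0xF

static unsigned masks;                  /* both processes in one word */

void block_on(int proc, unsigned chans) {   /* chans: bit per channel */
    masks |= chans << (proc ? P1_SHIFT : P0_SHIFT);
}

void unblock(int proc) {                /* zero out this process's bits */
    masks &= ~((unsigned)FIELD << (proc ? P1_SHIFT : P0_SHIFT));
}

int waiter_on_channel(int chan) {       /* anyone blocked on channel chan? */
    unsigned bit = 1u << chan;
    return (masks & (bit << P0_SHIFT)) || (masks & (bit << P1_SHIFT));
}
```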
Another simple optimization that helps `alt`'s performance is postponing as much computation as possible until after the rendezvous. For instance, if an object has to be allocated before being sent over the channel, the allocation is postponed so that the allocation does not happen if one of the other alternatives succeeds.
Messages on channels. Semantically, messages sent over channels require deep copies to be handed to the receiving processes. However, the implementation can simply increment the reference count of the objects to be sent over the channel and just send pointers to those objects. This works because only immutable objects can be sent over channels.
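The copy-avoidance can be sketched as follows; this uses a toy one-slot handoff standing in for ESP's rendezvous channels, and the `Msg`/`Channel` layouts are invented for illustration.

```c
#include <assert.h>

/* Sketch of copy-avoidance in message sends: the semantics call for
   a deep copy, but because only immutable objects travel over
   channels, the implementation just bumps the reference count and
   hands the receiver the same pointer. */

typedef struct { int refcount; int payload; } Msg;

typedef struct { Msg *slot; } Channel;

void send(Channel *c, Msg *m) {
    m->refcount++;                      /* "copy" = one extra reference */
    c->slot = m;                        /* receiver gets the pointer */
}

Msg *receive(Channel *c) { Msg *m = c->slot; c->slot = 0; return m; }
```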
The ESP compiler also avoids some unnecessary allocation associated with pattern matching. For instance, if a process wants to send more than one value over a channel, it has to put the values in a record. If the receiving process is using a pattern to access the components, the compiler can avoid allocating the record. This is possible because the static design of the language allows the compiler to look at all the patterns being used to receive messages on a channel along with all the senders on that channel.
6.2 Case Study: VMMC Firmware
Figure 5 compares the performance of the earlier VMMC implementation (`vmmcOrig`) with the performance of the new implementation using ESP (`vmmcESP`) using 3 microbenchmarks. In addition, we also present the performance of the earlier implementation with the fast paths disabled (`vmmcOrigNoFastPaths`). The ESP implementation currently does not implement fast paths.
The first microbenchmark measures the latency of messages of different sizes between applications running on 2 different machines. This is measured by running a simple ping-pong application that sends messages back and forth between 2 machines. Figure 5(a) shows that `vmmcESP` is around twice as slow as `vmmcOrig` for 4 byte messages and 38% slower for 4 Kbyte messages. However, `vmmcESP` is only 35% slower than `vmmcOrigNoFastPaths` in the worst case (for 64 byte messages) but has comparable performance for 4 byte and 4 Kbyte messages.
The second microbenchmark measures the bandwidth between two machines for different message sizes. In this case, an application running on one machine continuously sends data of a particular size to the second machine, which simply receives it. Figure 5(b) shows that vmmcESP delivers 41% less bandwidth than vmmcOrig for 1 Kbyte messages and 14% less for 64 Kbyte messages. However, vmmcESP is only 25% slower than vmmcOrigNoFastPaths for 1 Kbyte messages and 12% for 64 Kbyte messages.

[Figure 5: Performance of vmmcOrig, vmmcOrigNoFastPaths, and vmmcESP on the three microbenchmarks: (a) latency, (b) bandwidth, (c) bidirectional bandwidth.]
The final microbenchmark measures the total bandwidth between two machines for different message sizes in a different scenario. In this case, applications on two machines continuously send data to each other simultaneously. Figure 5(c) shows that vmmcESP delivers 23% less bandwidth than vmmcOrig for 1 Kbyte messages but comparable performance for 64 Kbyte messages. Also, vmmcESP is 20% slower than vmmcOrigNoFastPaths for 1 Kbyte messages but has comparable performance for 64 Kbyte messages.
The microbenchmark performance shows that vmmcESP performs significantly worse than vmmcOrig in certain cases (latency of small messages). However, most of the performance difference is due to the brittle fast paths. Also, the performance difference is significantly less in the bidirectional bandwidth microbenchmark, where the firmware has to deal with messages arriving from the network as well as the host at the same time. In the other two microbenchmarks, the firmware has to deal with only one type of message at a given instant.
The microbenchmarks represent the worst case scenario. The impact of the performance difference on real applications should be much smaller [17, 5] for a number of reasons. First, the vmmcOrig numbers represent the performance of some hand-optimized fast paths in the system. These fast paths tend to be fairly brittle, and applications often fall off the fast path. While some applications [16] (which repeatedly send very large messages) that have very simple communication patterns benefit from the fast paths, a lot of applications do not. SVM applications [4] experience a lot of contention in the network, and the actual latency measured by the different applications varied between 3 and 10 times slower than the microbenchmark numbers for small messages. So, for most applications, vmmcOrigNoFastPaths is a more accurate representative than vmmcOrig when comparing performance with vmmcESP.
Second, the microbenchmarks represent applications that spend 100% of their time communicating, while most real applications spend only a fraction of their time communicating and are, therefore, less sensitive to firmware performance [17, 5].
Finally, we plan to implement more aggressive optimizations that should decrease the performance gap. For instance, data-flow analysis is currently performed on a per process basis. We plan to extend data-flow analysis across processes.
7. RELATED WORK
Devices are usually programmed using event-driven state machines in languages like C, and sometimes, in assembly. We are not aware of any other high-level language for programming network devices.
Concurrency Theory. A number of languages like CSP [13] and Squeak [6] have been designed to gain a better understanding of concurrent programming. Both of these languages support processes communicating with each other. However, they were not designed with efficient implementation in mind.
Concurrent Languages. A number of languages like CML [19], Java [1] and OCCAM [20] support concurrency. CML [19] provides first-class synchronous operations. OCCAM [20] was designed to implement concurrent programs that run on a parallel machine. Java [1], like most other programming languages, provides user-level threads to express concurrency. All these systems are fairly expressive but hard to compile efficiently for devices.
Code Generation-Verification. A number of other languages [3, 8, 2] have taken a similar approach of generating efficient executables as well as specifications that can be used by a verifier. However, they differ from ESP significantly.
Esterel [3] was designed to model the control of synchronous hardware and has recently been used to efficiently implement a subset of the TCP protocol [7]. It adopts the synchronous hypothesis (the reaction to an external event is instantaneous) and ensures that each event has a unique, and therefore deterministic, reaction. This makes the programs easy to analyze and debug. Esterel programs can be compiled to generate both software and hardware implementations. However, using Esterel to implement device firmware has several drawbacks. First, the reactions are not instantaneous in practice. For instance, if a DMA becomes available while an event is being processed, it cannot be used to process the current event; the "DMA available" event would be registered on the next clock tick and only then be available for use. This results in inefficient use of the DMA. Second, the synchronous hypothesis forces some constraints on valid programs. For instance, every iteration of a loop has to have a "time consuming" operation like signal emission. In addition, this constraint has to be verifiable by the compiler. This disallows simple loops that initialize an array. Finally, the language is designed to encode only the control portion of the program. The data handling has to be performed externally using the C interface. This forces some of the complex tasks, including memory management, to be implemented in C.
Teapot [8] is a language for writing coherence protocols that can generate efficient protocols as well as verify correctness. It uses a state machine to keep track of the state of a coherence unit (a cache line or a page). The state machine is specified using a set of handlers similar to the C interface described in Section 2.2. However, Teapot uses continuations to reduce the number of states that the programmer has to deal with. While this approach works well when applied to coherence protocols, it suffers from some of the same problems described in Section 2.2 when used to implement device firmware. Teapot also does not provide any support for complex datatypes and dynamic memory management.
Promela++ [2] is a language designed to implement layered network protocols. The adjacent layers communicate using FIFO queues. Although the layered framework works well for writing network protocols, it is too restrictive for writing firmware code, where the different modules have much more complex interactions. Promela++ also does not provide any support for dynamic memory management.
Software Testing. Some systems [12, 9] have been successful in finding bugs in existing software written in traditional languages like C. Verisoft [12] does this by modifying
the scheduler of the concurrent system to do a state-space exploration. Meta-level Compilation [9] attempts to verify system-specific invariants at compile time. However, these systems do not simplify the task of writing concurrent programs.
8. CONCLUSIONS
We have presented the design and implementation of ESP—a language for programmable devices. ESP has a number of language features that allow development of compact and modular concurrent programs. ESP programs can be developed and debugged using the SPIN model-checking verifier. The compiler automatically generates SPIN specifications from ESP programs. Once debugged, ESP programs can be compiled into efficient firmware that runs on the programmable device.
We have reimplemented VMMC firmware for the Myrinet network interface cards using ESP. Our main conclusions are the following:
- Programming event-driven state machines can be fairly easy with the right language support. We found that the firmware can be programmed with significantly fewer lines of code. In addition, since C code is used only to perform simple operations, all the complexity is localized to a small portion of the code (about 300 lines in our implementation). This is a significant improvement over the earlier implementation where complex interactions were scattered over the entire C program (15600 lines).
- Model-checking verifiers like SPIN can be used to extensively test the firmware. However, state-space explosion limits the size of the models that can be checked. SPIN was used to develop and debug a retransmission protocol. The new implementation took around 2 days (compared to the earlier implementation which took around 10 days). SPIN was also used to exhaustively check the memory safety on the firmware.
- The performance overhead of using ESP is relatively small. Our microbenchmarks measurements indicate that most of the performance difference with the earlier implementation of VMMC is due to brittle fast paths that rarely benefit applications. Based on earlier application studies [17, 5], we expect the impact of the extra performance overhead to be relatively small.
9. ACKNOWLEDGEMENTS
We would like to thank Rudrajit Samanta, Tammo Spalink, Daniel Wang, Dirk Balfanz and the anonymous reviewers whose comments have helped improve the paper.
10. REFERENCES
```
void handleReq() { // Req has arrived
    switch ( reqSM1->type ) {
    case SendReq:
        pAddr = translateAddr( reqSM1->vAddr);
        if ( dmaIsFree() ) fetchData();
        else setState( SM1, WaitForDMA);
        return; // Block state machine
    case UpdateReq:
        updateAddrTrans( reqSM1->vAddr, reqSM1->pAddr);
        ...
    }
}

void fetchData() { // DMA is available
    sendData = dmaData( pAddr, reqSM1->size);
    if ( isState( SM2, WaitForDMA)) setState( SM1, WaitForSM1);
}

void syncSM2() { // SM2 is ready for next request
    reqSM2->dest = reqSM1->dest;
    deliverEvent( SM2, SMReady);
    setState( SM1, WaitForReq); // Wait for next request
}
```
```
process pageTable {   // virtual to physical address mapping
    $table: #array of int = *[$TABLE_SIZE -> 0, ...];
    while ( true ) {
        alt {
            case( in( ptReqC, { ret, $vAddr}) ) {
                // Request to lookup a mapping
                out( ptReplyC, { ret, $table[$vAddr] });
            }
            case( in( userReqC, { update | { $vAddr, $pAddr} }) ) {
                // Request to update a mapping
                $table[$vAddr] = $pAddr;
            }
        }
    }
}

process SM1 {
    while ( true ) {
        in( userReqC, { send | { $dest, $vAddr, $size} });
        out( ptReqC, { 0, $vAddr} );
        in( ptReplyC, { 0, $pAddr });
        out( dmaReqC, { 0, $pAddr, $size} );
        in( dmaDataC, { 0, $sendData} );
        out( SM2, { dest, $sendData} );
        unlink( sendData);
    }
}
```
Abstract
Web browsers have become a de facto universal operating system, and JavaScript its instruction set. Unfortunately, running other languages in the browser is not generally possible. Translation to JavaScript is not enough because browsers are a hostile environment for other languages. Previous approaches are either non-portable or require extensive modifications for programs to work in a browser.
This paper presents DOPPIO, a JavaScript-based runtime system that makes it possible to run unaltered applications written in general-purpose languages directly inside the browser. DOPPIO provides a wide range of runtime services, including a file system that enables local and external (cloud-based) storage, an unmanaged heap, sockets, blocking I/O, and multiple threads. We demonstrate DOPPIO’s usefulness with two case studies: we extend Emscripten with DOPPIO, letting it run an unmodified C++ application in the browser with full functionality, and present DOPPIOJVM, an interpreter that runs unmodified JVM programs directly in the browser.
1. Introduction
Web browsers have become an increasingly attractive platform for application developers. Browsers make it comparatively easy to deliver cross-platform applications, because they are effectively ubiquitous. Practically all computing platforms—from desktops and tablets to mobile phones—ship with web browsers. Browsers are also getting faster. Most now incorporate optimizing just-in-time compilers for JavaScript, and expose features like access to the GPU through WebGL and high-speed video chat via WebRTC [10, 14]. This combination of features makes it possible for browsers to host the kind of richly interactive applications that used to be restricted to native environments.
In effect, web browsers have become a de facto universal computing platform: its operating system is the browser environment, and its sole “instruction set” is JavaScript. However, running languages other than JavaScript in the browser is not generally possible.
There are numerous reasons why browser support for programming languages other than JavaScript would be desirable. JavaScript is a dynamically-typed, prototype-based language whose design contains numerous pitfalls for programmers. Problems with JavaScript have led language implementors to design new languages for the browser that overcome JavaScript’s shortcomings, but these solutions all require that programmers learn a new language. Programmers who prefer to program in other paradigms (e.g., functional, object-oriented) currently must abandon these or build hacks onto JavaScript to accommodate their needs. There is also a vast body of well-debugged, existing code written in general-purpose programming languages. Making it possible to reuse this code, rather than requiring that it all be re-written in JavaScript, would speed application development and reduce the risk of introducing errors.
Translation, interpretation, or compilation of languages to JavaScript is necessary but not sufficient. Browsers lack many key abstractions that existing programming languages expect, impose significant limitations, and vary widely in their support for and compliance with standards:
- **Single-threaded Execution**: JavaScript is a single-threaded event-driven programming language with no support for interrupts. Events either execute to completion, or until they are killed by the browser’s watchdog thread because they took too long to finish.
- **Asynchronous-only APIs**: Browsers provide web applications with a rich set of functionality, but emerging APIs are exclusively asynchronous. Due to the limitations of JavaScript, it is not possible to create synchronous APIs from asynchronous APIs.
- **Missing OS Services**: Browsers do not provide applications with access to a file system abstraction. Instead, they offer a panoply of limited persistent storage mechanisms, making it difficult to manage large amounts of persistent data. Browsers also lack other OS services such as sockets.
- **Browser Diversity**: Users access the web from a wide range of browser platforms, operating systems, and devices. Each combination may have unique performance characteristics, differing support for JavaScript and Document Object Model (DOM) features, and outright bugs. This diversity makes it difficult to address any of the issues above without excluding a large portion of the web audience.
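The asynchronous-only limitation can be seen in miniature without any browser API: a callback scheduled on the event loop can only run as a *later* event, so no synchronous code can observe its result. A minimal sketch (plain JavaScript; `setTimeout` stands in for any asynchronous browser API):

```javascript
// Sketch: a callback scheduled on the event loop never runs inside the
// current event, so a synchronous wrapper cannot wait for its result.
function scheduleFlag() {
  var state = { done: false };
  setTimeout(function () { state.done = true; }, 0); // runs as a later event
  // Any synchronous code here, including a busy-wait loop, still sees
  // state.done === false: the event loop cannot advance until we return.
  return state;
}
```

In a real browser, busy-waiting on `state.done` would therefore spin until the watchdog kills the script; returning control to the event loop is the only way to let the callback fire.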
Although previous work aims at supporting languages other than JavaScript in the browser, these projects all fall short. Conventional programming languages and their standard libraries expect the relatively rich execution environment that modern operating systems provide. The fact that browsers lack standard operating system features like threads, file systems, and blocking I/O means that these projects cannot run existing programs without substantial modifications (§2.1).
This paper identifies and describes how to resolve the impedance mismatch between the browser and the native environment that conventional programming languages expect. We present DOPPIO, a runtime system that makes it possible to execute unmodified applications written in conventional programming languages inside the browser. Its execution environment overcomes the limitations...
Table 1. Feature comparison of systems that execute existing code inside the browser. The asterisk and dagger indicate limitations that prevent execution on browsers used by over half of the web population today: “*” denotes a feature that requires a (non-default) backwards-compatibility flag in order to work in those browsers, while a “†” indicates that the feature will not work for them [3]. DOPPIO and the DOPPIOJVM implement all of these features in a cross-platform approach, letting them run unmodified programs in the vast majority of browsers.
<table>
<thead>
<tr>
<th>Category</th>
<th>Feature</th>
<th>DOPPIOJVM (JVM)</th>
<th>GWT (Java)</th>
<th>Emscripten (LLVM IR)</th>
<th>IL2JS (MSIL)</th>
<th>WeScheme (Racket)</th>
</tr>
</thead>
<tbody>
<tr>
<td>OS SERVICES</td>
<td>File system (browser-based) (§5.1)</td>
<td>✓</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td>Unmanaged heap (§5.2)</td>
<td>✓</td>
<td></td>
<td>*</td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td>Sockets (§5.3)</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td></td>
<td></td>
</tr>
<tr>
<td>EXECUTION SUPPORT</td>
<td>Automatic event segmentation (§4.1)</td>
<td>✓</td>
<td></td>
<td>✓</td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td>Synchronous API support (§4.2)</td>
<td>✓</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td>Multithreading support (§4.3)</td>
<td>✓</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td>Works entirely in the browser</td>
<td>✓</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>LANGUAGE SERVICES</td>
<td>Exceptions (§6.6)</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td>Reflection</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
The contributions of this paper are the following:
1. We identify the execution support and operating system abstractions that conventional programming languages and their runtime libraries require, yet are not present in browsers.
2. We describe how to emulate these resources in the browser on top of JavaScript, and implement them in a runtime system called DOPPIO.
3. As a proof-of-concept, we port the Java Virtual Machine to the browser using DOPPIO, allowing multiple languages and unmodified programs written in those languages to function completely in the browser.
4. We extend the Emscripten system with DOPPIO, making it possible to run a broader range of C/C++ applications inside the browser without modification.
5. We propose several unintrusive browser extensions that would greatly simplify supporting other programming languages inside the browser.
2. Related Work
While DOPPIO is the first runtime system and DOPPIOJVM the first language implementation to allow unmodified code written in a conventional programming language to execute across browsers, previous projects have (partially) implemented existing languages in the browser. Table 1 presents an overview of the features implemented by some well-known projects; only DOPPIO and the DOPPIOJVM implement all of the features required to run unaltered programs.
2.1 Conventional Languages
One of the most prominent and earliest implementations of conventional languages inside the browser is Google Web Toolkit (GWT), a source-to-source compiler from Java to JavaScript [11]. The goal of GWT is to let web developers write AJAX web applications using a restricted subset of Java. GWT developers can write small widgets and page components in Java which GWT compiles directly to JavaScript. However, GWT does not support compiling arbitrary Java programs to JavaScript. Using GWT imposes a number of limitations, in addition to the usual difficulties of statically compiling Java. With GWT, widgets must be coded carefully to avoid long-running functions that may make the web page unresponsive, programs can only be single-threaded, and most Java libraries are unavailable. GWT has its own class library that is modeled after the APIs available in the web browser. This class library emulates only a limited subset of the classes available in the Java Class Library, including essential Java data structures, interfaces, and exceptions [8].
Mozilla Research’s Emscripten project lets developers compile applications from the LLVM Intermediate Representation to JavaScript so they can run in the browser [27]. Emscripten primarily supports C and C++ applications, though in principle it can support any code compiled into LLVM’s IR. Emscripten emulates a number of core operating system services such as the heap and the file system, and provides partial graphics and audio support. However, long-running applications freeze the webpage because Emscripten does not automatically convert the program into finite-duration events that avoid blocking browser event handling (see Section 3 for details). Emscripten also does not support multithreaded applications, so each application “thread” must run to termination before other program code can be executed; yielding to other “threads” is not possible. As a result, program event handlers for mouse and keyboard events will not fire (that is, the browser will freeze) unless the application is completely rewritten in an event-driven manner to conform to the browser environment. Finally, Emscripten does not emulate synchronous source-language functions like the file system API in terms of the asynchronous APIs available in the browser, which prevents applications from operating on files or updating the display with the expected semantics. Alon Zakai, the lead developer of Emscripten, specifies that “JavaScript main loops must be written in an asynchronous way: A callback for each frame”, and that “if you do want [synchronous display updates] in a game you port, you’d need to refactor them to be asynchronous.”
Mozilla Research’s ASM.js project provides language implementations with a stripped-down subset of JavaScript that, when coupled with explicit browser support, removes garbage collection overhead and allows the program to be compiled ahead-of-time (rather than just-in-time) [4]. To accomplish this, ASM.js applications do not use JavaScript objects at all; instead, they manipulate binary structs on its emulated unmanaged heap. As it is a subset of JavaScript, ASM.js applications are still restricted by JavaScript’s single-threaded event-driven runtime model (see Section 3). Thus, applications ported to ASM.js face the same runtime-related issues as those ported to JavaScript.
Fournet et al. describe a verified compiler that compiles an ML-like language called F# to JavaScript [6]. The project used the λJS JavaScript semantics to formally verify the correctness of the compiler’s transformations [12]. However, as this compiler is for a new ML-like language and not for an existing language, it cannot be used to compile and run existing programs in the browser. Furthermore, this compiler does not provide support for any operating system abstractions.
Microsoft’s IL2JS compiles .NET Common Intermediate Language (CIL) into JavaScript [18]. This project can compile arbitrary .NET programs into JavaScript, but these programs cannot take advantage of operating system features such as the file system, the unmanaged heap, or standard input and output, since IL2JS does not implement any of the native methods in the .NET Base Class Library (BCL). As with other systems described above, any long-running programs compiled with IL2JS will freeze the browser, since IL2JS does not automatically convert programs into a series of finite-duration events.
Yoo et al. describe WeScheme, a hybrid system that makes it possible to run Racket code in the browser [25]. WeScheme comprises a compiler server responsible for compiling Racket code into JavaScript, and a JavaScript-based runtime system that copes with many of the drawbacks of the browser environment that DOPPIO overcomes. WeScheme does not emulate operating system services such as the file system or the unmanaged heap, and lacks support for many Racket language features, including reflection and certain primitive functions.
The Native Client (NaCl), Portable Native Client (PNaCl), and Xax projects let web sites execute sandboxed native code in an efficient manner [2, 5, 24]. NaCl and Xax applications are distributed in machine code form, and are not portable across architectures; the web page must provide a precompiled version of the software for each architecture. PNaCl overcomes this limitation via an architecture-independent bytecode format. However, PNaCl does not support C++ exception handling, dynamic linking, or the most commonly used implementation of the standard library — glibc. All three solutions completely circumvent the JavaScript engine, and thus require explicit browser support to function. These systems provide limited interoperability with JavaScript; as a result, programs running in these systems typically operate as black boxes on web pages, much like Java applets. Unlike these systems, DOPPIO can execute unmodified programs in any modern web browser by taking advantage of its existing JavaScript engine.
By contrast with the systems above, DOPPIO provides a complete platform that makes it possible to run unaltered applications written in conventional programming languages across browsers. DOPPIO ensures that the web page remains responsive regardless of the length of any computation, supports multithreaded applications, and implements the full range of required runtime and operating system abstractions, including synchronous I/O and a file system. The DOPPIOJVM supports running arbitrary, unmodified JVM programs, and provides access to common operating system features through DOPPIO.
2.2 New Languages
Several new languages have been proposed for the browser. Google has created Dart, a language that can be compiled to JavaScript or executed on a custom VM [7]. A number of so-called transpilers like CoffeeScript provide a convenient layer of syntactic sugar over JavaScript; CoffeeScript’s motto is “it’s just JavaScript” [13]. TypeScript is a typed superset of JavaScript from Microsoft that lets programmers annotate JavaScript programs with types, classes, and interfaces [20]. The TypeScript compiler performs type checking before performing a direct translation into JavaScript. DOPPIO itself is written in TypeScript.
These languages let developers write web applications using an alternative syntax to JavaScript, and compile directly to JavaScript in a straightforward manner. As a result, these languages face many of the same challenges as JavaScript for application development.
2.3 OS Approaches
In recent years, a number of operating systems have appeared that use a modified browser as the exclusive platform for applications. FirefoxOS is a Firefox-based operating system for mobile devices that only supports JavaScript and HTML based applications. ChromeOS is a Google Chrome-based operating system that takes the same approach as FirefoxOS, but adds support for Native Client applications. Both expose additional APIs to access OS-specific components; applications tailored to these environments can use them for additional functionality. As all applications for these platforms must be written in JavaScript or compiled to Native Client, they either suffer from the execution problems outlined in Section 3 or are non-portable across browsers.
The Illinois Browser Operating System (IBOS) tightly couples the browser with the operating system to safely sandbox web pages from native applications and to enable the development of new browser security policies [21]. Rather than providing a path for bringing existing applications to execute in the browser as JavaScript applications, IBOS lets existing applications run in a native sandbox that exposes a UNIX compatibility layer. In other words, applications are effectively virtualized inside a native environment.
3. Background: Browser Execution Model
This section provides detailed background on the browser environment and JavaScript, focusing on their characteristics and idiosyncrasies that make it impossible to directly execute applications written in conventional programming languages inside the browser. The following sections describe how DOPPIO overcomes these limitations.
3.1 The Execution Model
The JavaScript execution model in the browser is similar to standard GUI application development: JavaScript programs are single-threaded and completely event driven. That is, JavaScript programs execute as a sequence of finite-duration events that block UI interactions. Popular GUI toolkits for other languages, such as Swing, Windows Forms, and Windows Presentation Foundation (WPF), operate in a similar fashion; any computation performed in response to an event blocks all UI repainting and interaction.
Unfortunately, many applications written in conventional languages do not fit this model; that is, they do not decompose naturally into finite chunks of computation, or they rely on multiple simultaneous threads of execution. Even when an application does decompose into finite chunks of computation, there is still a problem: the running time of these chunks must be limited, and the limit depends on the browser and the performance of the platform running it. Browsers stop scripts that keep the page unresponsive to user input for too long (e.g., 5 seconds). There are no mechanisms for saving execution state to the heap or for performing meaningful stack introspection. As a result, long-running tasks cannot “pause” themselves for later execution (i.e., block) unless they do not rely on stack state or the programmer manually performs “stack ripping” [] to convert the application into continuation-passing style. These issues raise significant barriers to bringing existing applications, which typically expect a more traditional execution environment, into the web environment.
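One common workaround, which DOPPIO automates (§4.1), is to segment a long computation by hand into short events that reschedule themselves. A hedged sketch of the idea (the chunk size and callback shape here are illustrative, not DOPPIO's actual API):

```javascript
// Sketch: splitting a long-running loop into finite-duration events.
// Each step processes a bounded chunk, then yields to the event loop
// via setTimeout so the browser can repaint and handle input.
function sumChunked(n, done) {
  var total = 0, i = 0, CHUNK = 1000; // illustrative chunk size
  function step() {
    var end = Math.min(i + CHUNK, n);
    for (; i < end; i++) total += i;  // one bounded chunk of work
    if (i < n) setTimeout(step, 0);   // yield, then continue later
    else done(total);                 // all chunks finished
  }
  step();
}
```

Note that the loop state (`i`, `total`) survives between events only because it is captured in a closure; computations that keep their state on the call stack need the continuation-passing transformation mentioned above.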
3.2 Asynchronicity
Most input and output functionality in the browser environment can only be accessed through asynchronous APIs. An asynchronous function receives a callback function as an argument, which it will later invoke with the requested information. Due to JavaScript’s execution model, callback invocation occurs as an event; the event will not execute until the JavaScript thread has finished processing all events that occur before it. The JavaScript application cannot block waiting for the event to return, and it cannot introspect on waiting events to process an event at an earlier time. The application must stop executing—that is, complete all processing—in order to let the JavaScript thread process waiting events.
As a result, it is impossible to emulate a synchronous function call using an asynchronous function call. Any functionality available in the browser solely through asynchronous means can never be emulated through synchronous functions. This limitation severely restricts the class of applications that can be brought into the web environment with minimal changes.
As a concrete example of how serious this restriction is, consider the following C++ application. This example does not map cleanly into the browser because it relies on synchronous keyboard input, whereas the browser only exposes asynchronous keyboard events:
```cpp
#include <iostream>
using namespace std;
int main () {
char name[256];
cout << "Please enter your name: ";
cin.getline(name, 256);
cout << "Your name is " << name << endl;
return 0;
}
```
To port an application like this to JavaScript, in addition to changing the requisite library calls, the application would need to be broken up into separate events that can be assigned to callbacks:
```javascript
function main () {
  var name = '';
  var t = document.getElementById('terminal');
  t.onkeypress = function (e) {
    if (e.key === 'Enter') {
      t.innerHTML += "<br />Your name is " + name + "<br />";
      t.onkeypress = null;
    } else {
      var c = String.fromCharCode(e.charCode);
      name += c;
      t.innerHTML += c;
    }
  };
}
```
This case is reasonably straightforward to port, but this type of transformation can become unmanageably complex when blocking is invoked deep within the program. The program must then somehow postpone execution to free up the JavaScript thread until after the callback terminates.
Unfortunately, many browser features, including binary file downloads, are restricted to asynchronous APIs. As a result, it becomes difficult to port applications into the browser that expect to use these features synchronously.
3.3 WebWorkers and their Limitations
One apparent solution to this issue is a browser feature known as WebWorkers. WebWorkers let browser applications offload computation to a separate thread of execution. Unlike threads in other languages, WebWorkers do not share any memory with the JavaScript thread that spawned them. Instead, the only way the JavaScript thread and WebWorkers can communicate is via an asynchronous two-way communication channel that allows either thread to send a message to the other. These messages are processed using a registered callback.
WebWorker execution proceeds much like the main JavaScript thread. However, WebWorkers have no direct access to user input or to elements on the web page, so event execution does not block user input or GUI repainting. As a result, WebWorkers are well suited for long-running tasks.
Unfortunately, WebWorkers do not solve the problems described above. If a script executing in a WebWorker relies on mid-execution input, it must receive that information from the main JavaScript thread through its asynchronous message-passing interface. WebWorkers also do not enable true shared-memory multithreading in the browser, as there is no shared state among workers and the main JavaScript thread.
4. The DOPPIO Execution Environment
As Section 3 explains, it is not possible to perform a direct translation of arbitrary code into JavaScript for execution in the web browser because of issues with the event-driven browser execution model and the semantics of asynchronous JavaScript APIs. The program must either be extensively modified to deal with the differing semantics, or it must execute in a different execution environment that emulates the source language semantics that it expects.
DOPPIO takes the latter approach. In this section, we explain how DOPPIO's entirely JavaScript-based execution environment automatically segments existing programs into finite-duration events to prevent them from making the browser unresponsive to user input. We next describe how we use this mechanism both to emulate synchronous APIs in the source language in terms of asynchronous JavaScript APIs, and to implement multithreading.
4.1 Automatic Event Segmentation
To cope with the browser’s execution model, DOPPIO must break up the execution of existing programs into finite-duration events. To perform this task, DOPPIO’s execution environment contains a mechanism called suspend-and-resume that allows an executing program to suspend itself to the heap to be resumed later. With this mechanism, a program executing in this environment can periodically suspend itself to let other events in the browser event queue, such as user input, execute before it resumes.
Because this mechanism is not natively available in JavaScript, languages implemented using DOPPIO must satisfy two properties:
- **The call stack must be explicitly stored in JavaScript objects.** JavaScript lacks comprehensive introspection APIs and has no mechanism for saving stack state. As a result, programs executing in DOPPIO can only reliably use the JavaScript stack for transient state that will not be needed for program resumption.
- **The program must be augmented to periodically check if it should suspend.** JavaScript lacks preemption: once an event starts executing, it will continue executing until it completes or is killed by the browser. As a result, a language implemented using DOPPIO must call the execution environment periodically to check if it should suspend execution to free up the JavaScript thread.
Both of these transformations can be performed automatically by the language implementation. Section 6.1 describes how DOPPIOJVM implements these features for the JVM.
With an explicit call stack representation in hand, the DOPPIO execution environment can suspend a program for later resumption. To do so, it first creates an anonymous function—the resumption callback— that captures the call stack in a closure and that contains the logic needed to resume the program. It then passes the function to an asynchronous browser mechanism that will invoke it later. Various browsers provide different mechanisms that DOPPIO can exploit for this task; we describe these in Section 4.4. Finally, it notifies the language implementation that it should halt execution, with a promise that it will handle resuming it from that point later.
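A minimal sketch of this construction, where `interpret` stands in for the interpreter loop and `scheduleEvent` for the asynchronous browser mechanisms of Section 4.4 (both names are our assumptions):

```javascript
// Build a resumption callback that captures the explicit call stack in
// a closure, then hand it to an asynchronous scheduling mechanism.
function suspend(callStack, scheduleEvent, interpret) {
  const resumptionCallback = function () {
    interpret(callStack); // resume exactly where the program left off
  };
  scheduleEvent(resumptionCallback);
  // The caller now returns from the interpreter loop so that the
  // current JavaScript event completes and queued events can run.
}
```

Because the call stack lives on the heap rather than the JavaScript stack, nothing is lost when the interpreter unwinds its own stack to end the event.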
An alternative to this approach is to use ECMAScript 6 generators, which can be used to effectively "pause" a JavaScript function mid-execution with the `yield` statement. This functionality could be used to implement suspend-and-resume by yielding up the call stack. ECMAScript 6 is still in the drafting process, and the proposed generator functionality has only recently been implemented in Firefox and Chrome. As a result, DOPPIO does not use this strategy.
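For comparison, the generator-based alternative might look like the following sketch (our own illustration; as noted, DOPPIO does not take this approach):

```javascript
// `yield` pauses the function mid-execution; a small driver resumes it
// in a fresh event, so no single event runs for too long.
function* longTask(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) {
    total += i;
    if (i % 1000 === 0) yield; // suspension point
  }
  return total;
}

function drive(gen, done) {
  const step = gen.next();          // run until the next yield or return
  if (step.done) return done(step.value);
  setImmediate(() => drive(gen, done)); // resume in a later event
}
```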
To prevent applications from executing for too long, DOPPIO uses a simple counter to determine when an application needs to suspend. Each suspend check initiated by the language implementation decrements the counter by 1. When the counter reaches 0, DOPPIO determines how long it took for the counter to tick to 0. It then updates a cumulative moving average representing how often the program checks whether or not it should suspend. This new value, along with a preconfigured time slice duration, is then used to set the new counter value.
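A hedged sketch of this counter logic, with the seeding, rounding, and parameter names all our own assumptions:

```javascript
// shouldSuspend() is called by the language implementation at each
// check point; it returns true when the counter is exhausted, and sets
// the next counter from a cumulative moving average of the check rate
// so that roughly one time slice elapses per counter cycle.
function makeSuspendCheck(timeSliceMs, initialCount) {
  let counter = initialCount;
  let lastCount = initialCount;
  let avgChecksPerMs = 0;
  let cycles = 0;
  let start = Date.now();
  return function shouldSuspend() {
    if (--counter > 0) return false;
    const elapsed = Math.max(1, Date.now() - start);
    const rate = lastCount / elapsed;                 // checks/ms this cycle
    avgChecksPerMs = (avgChecksPerMs * cycles + rate) / (cycles + 1);
    cycles++;
    lastCount = Math.max(1, Math.round(avgChecksPerMs * timeSliceMs));
    counter = lastCount;                              // aim for one slice
    start = Date.now();
    return true;
  };
}
```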
### 4.2 Simulating Blocking with Asynchronous APIs
As stated earlier, it is not possible to emulate a synchronous JavaScript API in terms of an asynchronous JavaScript API. However, it is possible to emulate a synchronous API in the source language in terms of an asynchronous JavaScript API.
To accomplish this, the DOPPIO execution environment provides a variation on the suspend-and-resume functionality described in Section 4.1. When it wishes to invoke an asynchronous JavaScript function, the language implementation must craft a callback function that encapsulates the logic for migrating the data provided through the asynchronous API into items that the language can understand. DOPPIO wraps this callback in a variation of the resumption callback, and then calls the asynchronous API with the modified callback function.
When the browser triggers the resumption callback, it resumes program execution and forwards the data from the asynchronous call to the callback provided by the language implementation. The program executing in DOPPIO resumes as if it had just received data synchronously from a regular function call in its language.
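In outline, the wrapping might look like this sketch, where `vm` and `asyncRead` are hypothetical stand-ins for the language implementation and an asynchronous browser API:

```javascript
// Suspend the guest program, invoke the asynchronous API, and resume
// the guest with the result so the call appears synchronous to it.
function callAsync(vm, asyncRead) {
  vm.suspend(); // unwind the interpreter; the stack stays on the heap
  asyncRead(function (data) {
    vm.pushResult(data); // marshal the JS data into guest-level values
    vm.resume();         // continue as if a synchronous call returned
  });
}
```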
### 4.3 Multithreading Support
DOPPIO implements multithreading by exploiting the fact that programs executing in DOPPIO maintain an explicit representation of their stack. Since JavaScript lacks a mechanism for preempting execution, multithreading is necessarily cooperative from the JavaScript point of view. However, as language implementations must voluntarily specify valid context switches to DOPPIO, the semantics of multithreading may be preemptive in the source language (as in the Java Virtual Machine).
DOPPIO provides language implementations with a mechanism for switching threads, which is a simple variation of the suspend-and-resume functionality described in Section 4.1. DOPPIO maintains a "thread pool" — essentially an array of call stacks. When the language implementation determines that it is time for a context switch, DOPPIO saves the call stack of the currently running thread into this array, and chooses another thread to resume. Language implementations can provide a scheduling function that determines which thread to resume. By default, DOPPIO resumes an arbitrary thread from the thread pool that is marked as "ready".
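The thread pool and the default scheduling policy can be sketched as follows (all names are ours):

```javascript
// The "thread pool" is an array of saved call stacks; a context switch
// saves the current stack and resumes some other ready thread.
function makeScheduler() {
  const pool = []; // entries: { stack, ready }
  return {
    add(stack) { pool.push({ stack, ready: true }); },
    contextSwitch(currentStack) {
      pool.push({ stack: currentStack, ready: true });
      const next = pool.find((t) => t.ready); // default: any ready thread
      pool.splice(pool.indexOf(next), 1);
      return next.stack; // the stack to hand back to the interpreter
    },
  };
}
```

A language implementation could replace the `find` call with its own scheduling function, matching the paper's note that backends may supply their own policy.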
### 4.4 Browser Mechanisms for Quick Resumptions
To efficiently implement the suspend-and-resume mechanism described in Section 4.1, DOPPIO needs an asynchronous browser API that is able to insert the resumption callback into the JavaScript event queue as quickly as possible. However, most browsers lack an explicit mechanism for this use case. Below, we describe the options available to DOPPIO; it uses the best choice available in the browser executing it.
`setTimeout` is commonly used for delaying a function’s execution by a certain number of milliseconds. `setTimeout` is implemented by delaying the placement of the callback into the back of the JavaScript event queue by at least the specified delay.
However, even if the specified delay is 0, its specification dictates a minimum delay of 4ms, which would result in unacceptable performance.
`postMessage` is a mechanism for sending string-based messages to other open browser windows or tabs. The JavaScript application must register a global event handler to process these messages. This function is a better option for DOPPIO, as it places a message event on the back of the JavaScript event queue immediately. In most browsers, DOPPIO uses this mechanism to implement suspend-and-resume. Since it uses string-based messages, the DOPPIO execution environment generates unique string IDs for each resumption callback, and maintains a map from IDs to callbacks. When DOPPIO receives a message from itself through this interface, it calls the relevant resumption callback through the map.
Unfortunately, `postMessage` is synchronous in Internet Explorer 8; messages sent through `postMessage` immediately trigger the message handler. For IE8, DOPPIO uses `setTimeout` instead.
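The ID-to-callback map can be sketched as follows. The `send` function is injected so the sketch is self-contained; in the browser it would post the ID as a string message back to the window's own message handler. All names are ours:

```javascript
// Register each resumption callback under a unique string ID, send the
// ID as a message, and dispatch through the map when it arrives.
function makeDispatcher(send) {
  const callbacks = new Map();
  let nextId = 0;
  return {
    schedule(cb) {
      const id = 'doppio-resume-' + nextId++;
      callbacks.set(id, cb);
      send(id); // browser: post the ID to our own message handler
    },
    onMessage(id) {
      const cb = callbacks.get(id);
      if (cb) { callbacks.delete(id); cb(); } // fire at most once
    },
  };
}
```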
`setImmediate` is a mechanism for immediately placing an event at the back of the JavaScript event queue with no delay. This mechanism is ideal for DOPPIO, as it has the exact semantics required to implement suspend-and-resume. At this time, Internet Explorer 10 is the only browser that implements this API, although efforts are underway to make it a standard.
5. DOPPIO Operating System Services
The web browser lacks a number of core operating system features that existing programs depend on, such as the file system, access to unmanaged memory, and network sockets. As a result, these abstractions must be implemented in terms of the resources available in the browser so that arbitrary programs can run in the web environment. This section outlines how DOPPIO implements these abstractions.
5.1 File System
Many existing programs depend on the presence of a file system to persist state, but browsers do not provide such a facility. Instead, they provide a hodgepodge of persistent storage mechanisms with different storage formats, restrictions, compatibility across browsers, and intended use cases. Furthermore, many do not expose synchronous interfaces, making it impossible to implement a blocking file system on top of them. Table 2 illustrates the properties and compatibility of a subset of these mechanisms.
However, by combining the execution environment outlined in Section 4 with a unified asynchronous file-based storage abstraction, DOPPIO can provide existing programs with the synchronous file system semantics they expect, with high compatibility across browsers. This approach requires three primary components: (1) a mechanism for manipulating and interpreting binary file data, (2) an implementation of this unified file system API, and (3) a mechanism for defining different “file system” backends for each persistent storage solution, including cloud storage. Figure 2 displays an overview of the DOPPIO file system. We describe these three components below.
Binary Data in the Browser. Because it is a high-level language, JavaScript does not offer extensive support for manipulating binary data. Some browsers contain an API for natively downloading and manipulating binary data, called “Typed Arrays”. Others lack this functionality, and can only download binary data as a JavaScript string. All browsers lack a mechanism for converting between JavaScript strings and binary data, which is required to make use of many of the string-based persistent storage mechanisms in the browser.
To address these deficiencies and inconsistencies, DOPPIO’s file system implements the Node JS Buffer module in the browser. Buffer provides applications with a comprehensive mechanism for manipulating a binary buffer of data. It allows applications to read and write unsigned integers, signed integers, and floating-point numbers of various sizes. It also contains a mechanism for reading and writing binary string data in various formats (ASCII, UTF-8, UTF-16, UCS-2, BASE64, and HEX).
DOPPIO’s implementation of Buffer can either be backed by typed arrays if the browser has support for them, or by a regular JavaScript array of numbers. The string conversion functionality present in the Buffer class serves double-duty as a centralized mechanism that any file system backend can use to read from and write to string-based persistent storage mechanisms. In light of this fact, our Buffer implementation supports a special “binary string” format that efficiently packs 2 bytes of data into each JavaScript UTF-16 character; this functionality is only available in browsers that do not perform validity checks on JavaScript strings, as a number of 2 byte sequences are not valid UTF-16 characters. For other browsers, this string format reverts to storing a single byte per character.
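The two-bytes-per-character packing can be sketched as follows; pairing the low byte into the low half of each UTF-16 code unit is our own assumption about the layout:

```javascript
// Pack two bytes of data into each UTF-16 code unit. Some of the
// resulting code units are not valid UTF-16, which is why this format
// only works in browsers that skip string validity checks.
function packBytes(bytes) {
  let s = '';
  for (let i = 0; i < bytes.length; i += 2) {
    const lo = bytes[i];
    const hi = i + 1 < bytes.length ? bytes[i + 1] : 0;
    s += String.fromCharCode(lo | (hi << 8));
  }
  return s;
}

// Invert the packing; byteLength disambiguates odd-length payloads.
function unpackBytes(s, byteLength) {
  const bytes = [];
  for (let i = 0; i < s.length; i++) {
    const code = s.charCodeAt(i);
    bytes.push(code & 0xff);
    if (bytes.length < byteLength) bytes.push(code >> 8);
  }
  return bytes;
}
```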
Unified File System API. To provide language implementations with a familiar and consistent file system API, DOPPIO emulates the Node JS file system module, fs, inside the browser. fs is a light JavaScript wrapper around Unix file system calls, like open and stat. As a result, most languages’ file system APIs map cleanly onto its functionality.
DOPPIO also emulates two other Node modules that are closely related to the file system module: path and process. path contains useful path string manipulation functions. process encapsulates miscellaneous environment features; DOPPIO only implements the functionality required to emulate changing the current working directory, which programs may rely upon to resolve relative file paths.
The original Node fs module exposes two variants of its API: a synchronous version and an asynchronous version. Since we are unable to provide a synchronous JavaScript interface for many persistent storage mechanisms, our emulated fs module only guarantees the availability of the asynchronous interface for any given backend. Language implementations can combine our asynchronous fs module with the synchronous source language API support outlined in Section 4 to provide existing programs with the synchronous file system API they normally expect. We describe this process for the JVM in Section 6.3 where we discuss the implementation of DOPPIOJVM.
To better take advantage of JavaScript’s strengths, the fs API deviates from the Unix standard in one important way: DOPPIO’s file descriptors are actual objects. This approach simplifies the implementation of separate backends, which we discuss next.
**Backend API.** A backend for the file system API only needs to implement nine methods that correspond to standard Unix file system commands: rename, stat, open, unlink, readlink, mkdir, readdir, close, sync. A backend can optionally also support chown, chuser, utimes, link, and symlink. The unified file system API handles standardizing arguments to these methods, throwing errors for invalid arguments when appropriate, and simulating redundant API functions in terms of these core functions; this service dramatically reduces the amount of logic that each file system needs to implement.
The DOPPIO file system provides backends with a number of useful utility classes: an index that any can use to cache directory listings and files, a standard file implementation that loads the entire file into memory and implements sync-on-close semantics, and the standard Buffer module for manipulating binary file data. However, a file system is not required to use any of these utilities; each has complete freedom to implement the internal data structures in any way so long as it consistently implements the backend API.
Via these utility classes, a developer needs to implement just nine methods to provide a new file system backend with full-featured read/write functionality, NFS-style sync-on-close semantics, and files that are completely loaded into memory before they are operated on. This approach makes it possible to quickly build new file system backends.
Unlike Unix, DOPPIO uses objects to represent file descriptors. In addition to being a natural design decision for an object-oriented language, these objects let separate file system backends share core file manipulation logic, which determines the syncing and prefetching strategy for the file system.
Using this backend API, we have implemented backends for five separate file storage mechanisms, which can be seen in Figure 2. Two are backed by different browser-local storage mechanisms (described in Table 2), one provides temporary in-memory storage, one offers read-only access to files served by the web server, and one provides access to Dropbox cloud storage.
### Mounting File Systems
DOPPIO’s emulated fs module is only responsible for interacting with a single root file system. However, a number of systems may want to mount multiple file systems in a Unix-style directory tree. This would provide programs with a convenient mechanism for transferring files to different backends, or for implementing an in-memory temporary file system that emulates /tmp.
To facilitate this use case, DOPPIO provides a standard `MountableFilesystem` class that handles performing operations across different file system backends. This file system simply uses the standard backend API to facilitate these interactions; as a result, it will be compatible with any new file systems that are implemented in the future, including cloud storage backends.
### 5.2 Unmanaged Heap
Programs use the unmanaged heap either to perform unsafe memory operations (in managed languages), or as the source of dynamically allocated memory (in unmanaged languages).
DOPPIO emulates the unmanaged heap using a straightforward first-fit memory allocator that operates on JavaScript arrays. Each element in the array is a 32-bit signed integer, which represents 32 bits of data. This approach is convenient because JavaScript only supports bit operations on signed 32-bit integers. When an application calls an API method to write data to the unmanaged heap, DOPPIO converts the data into 32-bit chunks and stores it into the array in little endian format; we chose little endian in order to be consistent with DOPPIO’s alternative Typed Array heap implementation, which necessarily uses little endian. When the data is later retrieved, DOPPIO decodes it back into its original form.
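A hedged sketch of such a first-fit allocator over an array of 32-bit words (the free-list representation and word-granularity sizes are our assumptions):

```javascript
// Emulate an unmanaged heap as a JavaScript array of 32-bit words,
// with a first-fit free list tracking available regions.
function makeHeap(words) {
  const mem = new Array(words).fill(0);
  const free = [{ addr: 0, size: words }];
  return {
    mem,
    alloc(size) {
      for (let i = 0; i < free.length; i++) {
        const block = free[i];
        if (block.size >= size) {          // first fit
          const addr = block.addr;
          block.addr += size;
          block.size -= size;
          if (block.size === 0) free.splice(i, 1);
          return addr;
        }
      }
      return -1; // out of memory
    },
    freeBlock(addr, size) { free.push({ addr, size }); },
  };
}
```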
Due to the encoding/decoding process, data stored to and read from DOPPIO’s heap are actually copied; updates must be kept in sync according to the language’s semantics.
### Typed Arrays
Modern browsers support typed arrays that operate on a fixed-size `ArrayBuffer` object. The data in the `ArrayBuffer` can be interpreted as an array of various signed, unsigned, and floating-point data types by initializing a typed array of the appropriate type with the `ArrayBuffer`. As a result, DOPPIO can use typed arrays to efficiently convert between numeric types.
Note that typed arrays are little endian; this detail is not configurable. DOPPIO uses `ArrayBuffer` objects for its heap when available to take advantage of these simple numeric conversions.
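For example, a double can be decomposed into its IEEE 754 bytes by aliasing one `ArrayBuffer` with two typed-array views:

```javascript
// Two views over the same ArrayBuffer reinterpret the same 8 bytes.
const buf = new ArrayBuffer(8);
const asFloat = new Float64Array(buf);
const asBytes = new Uint8Array(buf);

function doubleToBytes(x) {
  asFloat[0] = x;             // write the float...
  return Array.from(asBytes); // ...and read its bytes back (little endian)
}
```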
### 5.3 TCP Sockets
For security reasons, browsers do not provide JavaScript applications with direct access to network sockets. Instead, modern browsers provide a feature called WebSockets, which enables JavaScript applications to make outgoing full-duplex TCP connections to WebSocket servers. JavaScript applications cannot, however, accept incoming WebSocket connections.
Newly-opened WebSockets perform a standardized handshake that “promotes” an HTTP connection to the WebSocket protocol. Once the handshake completes, the JavaScript application can send and receive WebSocket messages, which are encapsulated in WebSocket data frames.
Existing socket-based servers and clients expect a standard TCP handshake and the ability to define custom application-layer data frame formats. As a result, they will not be able to send or receive WebSocket connections out-of-the-box.
Resolving this problem requires a solution for clients running in the browser that make outgoing socket connections, and for servers running on native hardware that expect incoming socket connections. DOPPIO resolves the client side of the issue by emulating a Unix socket API in terms of WebSocket functionality. The freely-available Websockify program provides a solution for the server end of the problem; it wraps unmodified programs, and translates incoming WebSocket connections into normal TCP connections [16]. In addition, Websockify provides a JavaScript library that proxies WebSocket connections through a Flash applet in older browsers that lack WebSocket support. DOPPIO uses this library to supply programs with socket support in a wide variety of browsers.

---

**Table 2.** Comparison of persistent storage mechanisms available in the browser. Note that this is only a partial listing – we do not include popular storage options enabled through native plugins, such as Flash or Silverlight; nor do we list the numerous cloud storage options. `Synchronous` describes whether or not the mechanism exposes a synchronous interface in the main JavaScript thread. `Compatibility` illustrates the mechanism’s compatibility across the current desktop browser market.
6. DOPPIOJVM
To demonstrate DOPPIO’s suitability as a full-featured operating environment for executing unaltered applications written in conventional programming languages, we built DOPPIOJVM. DOPPIOJVM is a robust prototype Java Virtual Machine (JVM) interpreter that operates entirely in JavaScript. DOPPIOJVM implements all 201 bytecode instructions specified in the second edition of the Java Virtual Machine Specification [15], supports multithreaded programs, supports multiple languages that run on top of the JVM, and implements many of the complex mechanisms and native functionality that JVM programs expect. This level of compatibility would not have been possible without the support provided by the DOPPIO execution environment and operating system abstractions. This section describes a number of DOPPIOJVM’s key features, and how they rely on support provided by DOPPIO.
6.1 Segmented Execution
Due to the JavaScript execution model, DOPPIOJVM must execute as finite-duration events to prevent the browser from stopping its execution. DOPPIOJVM uses DOPPIO’s suspend-and-resume functionality to achieve this. However, it must satisfy the requirements outlined in Section 4.1 before it can use this mechanism.
DOPPIOJVM contains a straightforward JavaScript representation of the JVM call stack. The JVM Specification states that each stack frame contains an operand stack, and an array of local variables. JavaScript arrays are unbounded, and support push and pop operations; thus, DOPPIOJVM’s stack frame is a JavaScript object that contains an array for the operand stack, an array for the local variables, and a reference to the method that the stack frame belongs to. The call stack is simply an array of these stack frame objects. A positive side effect of explicitly representing the call stack in this manner is that DOPPIOJVM trivially supports the Java Class Library reflection APIs for stack introspection.
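The frame layout might look like the following sketch (field names are ours, not DOPPIOJVM's actual ones):

```javascript
// One JVM stack frame: an operand stack, a locals array, and a
// reference to the method the frame belongs to.
function makeFrame(method, maxLocals) {
  return {
    method,                               // which method this frame executes
    opStack: [],                          // JS arrays give us push/pop
    locals: new Array(maxLocals).fill(null),
  };
}

// The call stack is simply an array of frames, so it lives on the
// heap and survives suspension.
const callStack = [makeFrame('main([Ljava/lang/String;)V', 2)];
callStack[0].opStack.push(42); // e.g. the result of an iconst/bipush
```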
To ensure that it suspends in a timely fashion, DOPPIOJVM checks at each function call boundary whether it should suspend. This is not a perfect solution, as it is possible in theory to execute an extremely long-running loop that makes no method calls. This concern does not arise in practice; however, it would be possible to instrument loop back edges to perform the same checks.
6.2 Multithreading
DOPPIOJVM uses DOPPIO’s “thread pool” to emulate multiple JVM threads. DOPPIOJVM checks for waiting threads at fixed context switch points, such as JVM monitor checks, atomic operations, and any other form of lock-checking. The current implementation does not prevent the starvation that can occur if a running thread never reaches one of these context switch points. That said, DOPPIOJVM supports a wide range of complex multithreaded programs, some of which we evaluate in Section 7. We plan to switch to a more general mechanism, such as switching threads each time the JVM invokes the DOPPIO suspend-and-resume mechanism.
6.3 Native Methods
The Java Class Library exposes JVM interfaces to a wide variety of native functionality, such as the file system, unsafe memory operations, and network connections. These methods cannot be implemented using JVM bytecodes, and are marked as “native”. DOPPIOJVM implements a wide variety of these native methods directly in JavaScript. The methods corresponding to the file system API use the DOPPIO file system, the methods corresponding to unsafe memory operations use the DOPPIO heap, and the methods corresponding to network connections use DOPPIO sockets. When a native method needs to use an asynchronous browser API, DOPPIOJVM uses the suspend-and-resume mechanism in the manner described in Section 4.2 to “pause” execution until the browser triggers the resumption callback. In this way, the native methods retain their JVM-level synchronous semantics.
6.4 Class Loading
When a bytecode instruction references a class for the first time, the JVM invokes a complex dynamic class loading process to resolve the reference to a class definition. This process is specified in Chapter 5 of the JVM specification [15].
However, the class loading mechanism described in the specification assumes the presence of a file system. To resolve a class reference, the class loader is required to check the folders and JAR archive files specified on the class path for the class’s representative .class file. In addition, decoding these class file definitions requires functionality that can convert the binary representations of various numeric formats and a standard string format into JavaScript numbers and strings. Neither of these features are available in standard browser environments.
The DOPPIOJVM class loader uses the DOPPIO file system and its Buffer module to appropriately download and parse JVM class files. In particular, DOPPIOJVM uses a file system backend that is backed by dynamic file downloads to make the entire Java Class Library available in the browser. When the class loader opens a class file for reading, the file system backend launches an asynchronous download request for the particular file to load it into memory before passing it to the class loader for further execution. This design prevents DOPPIOJVM from loading unneeded class files into memory or browser-local persistent storage before execution.
6.5 Unsafe Memory Operations
The sun.misc.Unsafe API lets the JVM perform unsafe memory operations via access to an unmanaged heap. The OpenJDK Java Class Library requires this API, which it uses to determine the underlying system’s endianness at startup. DOPPIOJVM uses DOPPIO’s unmanaged heap implementation to provide this functionality to JVM programs via the same API.
6.6 Exceptions
The JVM is natively aware of exceptions and exception-handling logic. However, because DOPPIOJVM uses DOPPIO to execute as finite-length events, it cannot use JavaScript’s native exception mechanisms to emulate JVM exceptions.
Instead, DOPPIOJVM emulates JVM exception handling semantics by iterating through its virtual stack representation until it finds a stack frame with an applicable exception handler, or until it empties the stack and exits with an error.
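In outline (handler and frame field names are our assumptions), the dispatch looks like:

```javascript
// Walk the explicit stack from the top, looking for a frame whose
// exception table covers the current pc and the thrown type; unwind
// frames without a matching handler.
function throwException(callStack, exType) {
  while (callStack.length > 0) {
    const frame = callStack[callStack.length - 1];
    const handler = (frame.handlers || []).find(
      (h) => frame.pc >= h.start && frame.pc < h.end &&
             (h.type === null || h.type === exType) // null = catch-all
    );
    if (handler) {
      frame.pc = handler.target; // resume execution at the handler
      return frame;
    }
    callStack.pop(); // no handler here: unwind this frame
  }
  return null; // stack emptied: exit with an error
}
```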
6.7 JVM Objects and Arrays
DOPPIOJVM maps JVM objects to JavaScript objects, where each object contains a reference to its class and a dictionary that contains all of its fields keyed on their names. JVM arrays are a special type of JVM object; these are mapped to a JavaScript object that contains an array of values and a reference to the special array class that the JVM constructs according to the array’s component type. DOPPIOJVM takes full advantage of the JavaScript garbage collector, which automatically collects JVM objects when they fall out of scope.
6.8 Interoperability with JavaScript
While DOPPIOJVM is capable of executing programs entirely written in a JVM language, it can also interoperate with the JavaScript environment. DOPPIOJVM exposes an eval method that lets JVM programs execute snippets of JavaScript. This method returns a JVM `String`, which contains the return value of the operation coerced into string form. DOPPIOJVM also makes it possible for a JavaScript program to invoke the JVM much as one would invoke Java on the command line via an API: the programmer specifies the classpath, main class, and arguments, and optionally, custom functions to redirect standard input and output.
7. Evaluation
7.1 Case Study 1: DOPPIOJVM
We evaluate DOPPIOJVM’s completeness and performance on a set of real and unmodified complex JVM programs across a wide variety of browsers. We compare DOPPIOJVM’s performance to Oracle’s HotSpot JVM interpreter provided with OpenJDK, which is able to run JVM programs in the browser using an applet plugin. While Section 2 describes a variety of systems that bring existing programming languages into the browser, these systems are unable to run our benchmarks, so we are unable to compare their performance to DOPPIOJVM.
Our benchmarks and their respective workloads are as follows. **javap** (4KLOC) is the Java disassembler. We run javap on the compiled class files of javac, which comprise 491 class files; we use the version of javap and the class files of javac that ship with OpenJDK 6. **javac** (4KLOC) is the Java compiler. We run javac on the 19 source files of javap, using the version of javac that comes bundled with OpenJDK 6 and the source of javap from the same bundle. **Rhino** (57KLOC) is an implementation of the JavaScript language on the JVM. We run Rhino 1.7 on the `recursive` and `binary-trees` programs from the SunSpider 0.9.1 benchmark suite. **Kawa-Scheme** (121KLOC) is an implementation of the Scheme language on the JVM. We evaluate Kawa-Scheme 1.13 on the `nqueens` algorithm with input 8. Our benchmark computer is a Mac Mini running OS X 10.8.4 with a 4-core 2GHz Intel Core i7 processor and 8GB of 1333 MHz DDR3 RAM. We evaluate DOPPIOJVM in Chrome 28.0, Firefox 22.0, Safari 6.0.5, Opera 12.16, and Internet Explorer 10, with Internet Explorer 10 running in a Windows 8 virtual machine using the Parallels 8 software.
DOPPIOJVM is able to successfully execute all of these applications to completion; we did not need to make any modifications to these applications. Figure 3 presents execution times across various browsers versus Oracle’s HotSpot interpreter. DOPPIOJVM achieves its highest performance on Chrome: compared to the HotSpot interpreter, DOPPIOJVM runs between 24× and 42× slower (geometric mean: 32×). This performance degradation is explained by two facts: first, DOPPIOJVM is largely untuned; second, it pays the price for executing on top of JavaScript and inside the browser. By contrast, the HotSpot interpreter is a highly tuned native executable.
As Figure 3 shows, Chrome performs better than other browsers across most of the benchmarks we examine. However, it would be problematic to draw any conclusions about Chrome’s superiority with respect to other browsers, as we used Chrome as the development platform for DOPPIOVM. As a result, we may have inadvertently made design decisions that benefited Chrome over other browsers.
While running the javap benchmark, we discovered a bug in Safari that causes significant performance degradation. Safari does not properly garbage collect typed arrays; they remain in memory until the browser closes. DOPPIOVM’s file system implementation makes heavy use of typed arrays in browsers that support them to efficiently represent binary data. This detail poses a problem for the javap benchmark in this browser, as it manipulates a considerable number of files. As a result, Safari’s memory footprint grows to over 6GB over the course of each javap benchmark run, causing the OS to page memory to disk and degrade DOPPIOVM’s performance. We have reported this issue to Apple.
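To illustrate the pattern that triggers the leak, here is a minimal sketch of binary file contents held in typed arrays; the class and method names are illustrative, not DOPPIO's actual file-system API:

```javascript
// Illustrative in-memory file backed by a typed array, roughly how a browser
// file system can represent binary data. Every write allocates a fresh
// Uint8Array; a browser that never garbage-collects typed arrays would
// therefore leak each old backing buffer.
class MemoryFile {
  constructor() {
    this.data = new Uint8Array(0);
  }
  // Append bytes by allocating a new backing buffer and copying.
  write(bytes) {
    const grown = new Uint8Array(this.data.length + bytes.length);
    grown.set(this.data, 0);
    grown.set(bytes, this.data.length);
    this.data = grown; // the previous buffer becomes garbage here
  }
  read() {
    return this.data;
  }
}

const file = new MemoryFile();
file.write(new Uint8Array([0xca, 0xfe]));
file.write(new Uint8Array([0xba, 0xbe]));
console.log(Array.from(file.read())); // [202, 254, 186, 190]
```

A workload like javap, which touches hundreds of class files, produces many short-lived buffers of this kind, so a collector that never reclaims them accumulates memory quickly.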
7.2 Case Study 2: DOPPIO and C++
To further demonstrate DOPPIO’s utility and generality, we combined DOPPIO with Emscripten, extending its ability to port C++ applications to the browser. As a case study, we used it to run the C++ game *Me and My Shadow* in the browser. The Emscripten developers previously ported the core of this game to the web, but the port was incomplete: because Emscripten does not support synchronous dynamic file loading and does not back files to a persistent storage mechanism, the Emscripten demo needs to load all of the game assets into memory prior to execution and does not support game saving.
We modified Emscripten to use the DOPPIO file system, which is able to download the static game assets synchronously as the game requires them, and back the game’s configuration folder to `localStorage`. We did not need to modify the game in order to do this; we took the same source code that the Emscripten developers used to make their demo, compiled it with our augmented version of Emscripten, and configured the DOPPIO file system to mount the game’s resources and the browser’s persistent storage at appropriate folders in the file system hierarchy. The resulting demo does not preload any files, and is able to write to the file system to save game progress and settings.
8. Discussion
Based on insights gleaned while implementing DOPPIO and the DOPPIOVM, we believe that browsers could add several features that would make it far easier and more efficient to support conventional languages. These features are limited in scope, are fairly circumscribed in terms of implementation, and we expect they would have little impact on JavaScript programmers or users, while making it far easier to run other languages. By contrast, consider adding multiple threads of execution to JavaScript: while this would ease porting multithreaded applications, it would likely lead to shared-memory-related concurrency errors inside JavaScript applications.
**Synchronous message-passing API.** A synchronous message-passing API for WebWorkers would allow WebWorkers to subscribe to and periodically check for events through the main JavaScript thread without requiring them to yield the JavaScript thread for event processing. This feature would make it trivial to implement synchronous language functionality in terms of asynchronous browser functionality, as a WebWorker could use the main JavaScript thread to perform the asynchronous operation and periodically poll for a response.
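The polling protocol above can be sketched as a shared mailbox; this is a single-threaded simulation with illustrative names, where real code would exchange `postMessage` events between a WebWorker and the page:

```javascript
// Simulated mailbox through which a worker hands an asynchronous request to
// the main thread, keeps running, and periodically polls for the response.
class Mailbox {
  constructor() {
    this.response = null;
  }
  // Main-thread side: deliver the result of the completed async operation.
  deliver(value) {
    this.response = value;
  }
  // Worker side: non-blocking check; null means "not ready, keep working".
  poll() {
    const r = this.response;
    this.response = null;
    return r;
  }
}

const box = new Mailbox();
console.log(box.poll());      // null: the async operation is still pending
box.deliver('file contents'); // main thread finishes, e.g., a network read
console.log(box.poll());      // the worker picks up 'file contents'
```

The key property is that the worker never has to yield its own thread to receive the result; it only re-checks the mailbox between units of work.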
**Stack introspection.** A sufficiently complete stack introspection mechanism would allow programs to persist their state on the JavaScript heap as objects. Language implementations could then use this feature to implement multithreading and automatic event segmentation without needing to explicitly store the stack state themselves.
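Lacking such introspection, a language runtime must reify the stack itself. A toy sketch of the idea, with frames kept as heap objects so execution can pause and resume (the structure is illustrative, not DOPPIO's actual representation):

```javascript
// Toy resumable computation: the call "stack" is an explicit array of frame
// objects on the JavaScript heap, so the runtime can stop after any step and
// resume exactly where it left off (e.g., to yield to the browser event loop).
// A stack-introspection API would make this bookkeeping unnecessary.
function makeSummation(n) {
  return {
    frames: [{ i: 1, acc: 0 }], // one explicit frame: loop index + accumulator
    done: false,
    step() {
      const f = this.frames[this.frames.length - 1];
      if (f.i > n) {
        this.done = true;
        return f.acc;
      }
      f.acc += f.i;
      f.i += 1;
      return null; // control returns to the caller between steps
    },
  };
}

const task = makeSummation(3);
while (!task.done) task.step();  // could interleave event handling here
console.log(task.frames[0].acc); // 6, i.e. 1 + 2 + 3
```

Because the frames live on the heap rather than on the JavaScript stack, the whole computation can be suspended mid-flight and resumed from any event handler.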
**Numeric support.** Direct support for 64-bit integers would enable languages to efficiently represent a broader range of numeric types in the browser. The DOPPIO JVM uses a comprehensive software implementation of 64-bit integers to bring the long data type into the browser, but it is extremely slow when compared to normal numeric operations in JavaScript.
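A sketch of why software 64-bit arithmetic is slow: even addition must juggle two 32-bit halves with an explicit carry. The pair representation below is illustrative, not the DOPPIO JVM's actual long implementation:

```javascript
// 64-bit addition from two unsigned 32-bit halves, since a JavaScript number
// cannot represent every 64-bit integer exactly. Each value is {hi, lo}.
function addLong(a, b) {
  // Doubles hold integers exactly up to 2^53, so this 33-bit sum is exact.
  const loSum = (a.lo >>> 0) + (b.lo >>> 0);
  const carry = loSum > 0xffffffff ? 1 : 0;
  return {
    hi: ((a.hi >>> 0) + (b.hi >>> 0) + carry) >>> 0, // wraps modulo 2^32
    lo: loSum >>> 0,
  };
}

// 0x00000000ffffffff + 1 carries into the high word.
console.log(addLong({ hi: 0, lo: 0xffffffff }, { hi: 0, lo: 1 })); // { hi: 1, lo: 0 }
```

Multiplication and division are worse still, which is why native 64-bit integer support would be such a large win for language runtimes in the browser.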
9. Conclusion
While web browsers have become ubiquitous and so are an attractive target for application developers, they support just one programming language—JavaScript—and offer an idiosyncratic execution environment that lacks many of the features that most programs require, including file systems, blocking I/O, and multiple threads. They also are incredibly diverse, further complicating the task of programming web-based applications.
This paper presents DOPPIO, a runtime system for the browser that breaks the browser language barrier. DOPPIO addresses the challenges of executing programs written in general-purpose languages inside the browser by providing key system services and runtime support that abstract away the many differences across browsers. Using DOPPIO, we built DOPPIO JVM, a complete proof-of-concept implementation of a Java Virtual Machine in JavaScript. DOPPIO JVM makes it possible for the first time to run unmodified, off-the-shelf applications written in a conventional programming language directly inside the browser. DOPPIO JVM is already deployed as the compilation and execution engine for the educational website CodeMoo.com, which teaches students how to program in Java. We further demonstrate DOPPIO's utility by combining it with Emscripten, extending Emscripten's ability to port C++ applications to the browser.
TEAM LEARNING IN INFORMATION SYSTEMS DEVELOPMENT - A LITERATURE REVIEW
Kai Spohrer
*University of Mannheim*
Behnaz Gholami
*University of Mannheim*
Armin Heinzl
*University of Mannheim, Business School*
Recommended Citation
[http://aisel.aisnet.org/ecis2012/223](http://aisel.aisnet.org/ecis2012/223)
Abstract
Information Systems Development (ISD) is fast moving, knowledge-intensive and requires a substantial amount of teamwork. In order to develop quality software, teams need to leverage the skills and knowledge of each team member. ISD teams who engage in learning at a group level can perform more effectively and efficiently. However, relative to other disciplines, knowledge and literature about team learning in ISD research are new and dispersed. This fact hampers the cumulative progress in research that seeks to answer questions about how ISD teams learn to work together and improve their performance. We draw on and extend the classification scheme of Edmondson et al. (2007) and conduct a review of ISD team learning research literature. We synthesize the main findings and highlight the limitations of existing approaches. We emphasize potential directions for future research while focusing on the resulting implications for ISD management and methodology. We further demonstrate that there are four distinctive streams in ISD team learning research that differ in the manner that they conceptualize team learning, underlying theories, and research methodologies. Finally, we illustrate how these differing streams can cross-fertilize and thereby present notable aspects of team learning presently addressed by related disciplines for which there is scant or non-existent ISD research.
Keywords: team learning, information systems development, transactive memory system, development methodology.
1 Introduction
Information Systems Development (ISD) keeps changing and evolving rapidly in a huge variety of aspects, including the technologies which are developed, the methods that are applied, and the structures in which it is organized (Avison and Fitzgerald 2006). With agile principles integrated into ISD practices in many companies, the emphasis today is more than ever on the team as a source of creativity and software quality (Dybå and Dingsøyr 2008). Consequently, not only must the individual team members keep pace with frequent changes, but so must the team as a whole. In order to develop innovative solutions, all team members' skills and expertise must be leveraged and brought to bear at the team level. ISD teams who actively learn about technologies, customer activities, and group processes on a team level can, therefore, develop better software and increase their performance (Janz and Prasarnphanich 2009; Liang et al. 2010). While other disciplines have made attempts to organize their body of knowledge on team learning (e.g. Edmondson et al. 2007; Wilson et al. 2007; Goodman and Dabbish 2011), team learning research in ISD has been dispersed and unorganized thus far. Scholars have accentuated the need for team learning theories that take the specifics of disparate tasks and industry domains into account (Edmondson et al. 2007), but without a consolidated body of research this is hardly possible for ISD. In this paper, we capture, therefore, the current state of research on team learning in ISD in order to foster cumulative scientific progress in our discipline. We concentrate on the following questions: why and how do ISD teams learn; and what are the consequences for future research on ISD management and methodologies. Drawing on and extending Edmondson et al.
(2007)’s classification scheme of team learning literature, we critically synthesize substantive findings and reveal limitations that are present in the current streams of ISD team learning literature with regard to conceptualizations, research methodology, logic, and hidden assumptions. Moreover, we contrast research in ISD team learning with current team learning literature from neighboring disciplines. We depict concepts that are frequently investigated in those disciplines but have remained nearly unattended in our field. Finally, we highlight implications for research on ISD management and methodologies and thereby provide fruitful directions for future research in our discipline to understand ISD team learning and its outcomes.
2 Research Methodology
We conducted a systematic review of ISD literature to consolidate the present body of scholarly ISD research on team learning and explore how ISD can benefit from team learning research by related disciplines. We followed a systematic approach as proposed by different scholars (Kitchenham and Charters 2007; Kitchenham et al. 2009; Okoli and Schabram 2010) to create a proper synthesis and scholarly critique of theory (Webster and Watson 2002; Schwarz et al. 2007; Okoli and Schabram 2010). Having defined our research questions, we developed a review protocol. This protocol consisted of: (1) defined research sources; (2) means of access to be used; and (3) basic inclusion and exclusion criteria (Kitchenham and Charters 2007). As a working definition, we conceptualized team learning as any change in the work group’s repertoire of potential behaviors (Wilson et al. 2007), and included papers that addressed antecedents, effects or properties of such change. We excluded educational and technology-focused literature as we wanted to understand the social phenomenon of learning in ISD teams rather than education in classrooms or technological aspects of ISs. In order to restrict the sample of papers to rigorous research, we included only top quality outlets from IS and related disciplines. The list of sources searched included, among others, the AIS senior scholars’ basket of journals, IEEE Transactions on Engineering Management and Software Engineering, Management Science, Decision Sciences, Organization Science and top IS conference proceedings. We searched the selected IS outlets in the database Social Sciences Citation Index via the Web of Science, IEEE Xplore, and the AIS Electronic Library for the search term "learn*". In addition to IS journals, we searched a set of high impact journals from the related disciplines of management and organization science. Since we could build on several rigorous reviews in these disciplines (Edmondson et al. 2007; Wilson et al. 
2007; Goodman and Dabbish 2011), we focused on recent papers of these disciplines
published after 2006. We did not impose such a restriction on IS papers. In order to ensure consistency throughout the literature selection process, the first and the second authors of this paper engaged in the selection procedure (Okoli and Schabram 2010) by: (1) jointly conducting an examination of the first 25 papers; (2) discussing any different opinions about whether or not to include any one of these papers; (3) dividing further selection activities between them; and (4) then consulting one another regarding selections only in ambiguous cases. In order to address the critique of systematic literature reviews (Boell and Cezec-Kecmanovic 2011), we also reviewed the selected papers to determine whether any of them referenced research papers that we had inadvertently overlooked in our initial selection process. We realized that some relevant and referenced papers had not been retrieved by our first queries. Consequently, we performed an additional iteration of search and selection for the terms “knowledge management” and “transactive memory” which helped to include these papers. In total, our queries in titles, abstracts, and keywords returned more than 600 hits. Based on the titles and abstracts, we excluded all contributions that clearly did not address learning on a team level or that focused on the examination of knowledge management information systems without also addressing aspects of our defined research questions. 86 papers were selected for closer examination and 60 of these were found to address learning on a team level. We read the latter and extracted data about their findings, key concepts, methodology, and key aspects regarding conceptualizations of team learning, and of ISD if present. At least two authors validated the extracted data for every paper. Finally, we synthesized collectively the 24 studies that addressed ISD team learning, along with the key concepts and emerging patterns they described (Webster and Watson 2002; Schwarz et al. 
2007), and contrasted them to the research from related disciplines. These results are presented below.
3 Team Learning in Information Systems Development
Learning on a team level has long been recognized as a decisive factor influencing the performance of all work groups (Wilson et al. 2007). However, there is a variety of definitions of team learning. These conceptualizations of team learning can, therefore, be a useful criterion to understand the logic and hidden assumptions that underlie different streams of research on this topic. Edmondson et al. (2007) differentiate between three conceptualizations prevalent in literature and find that, according to the selected conceptualization, studies differ in their main dependent and independent variables, methods, and findings. In the following, we show that these streams also exist in ISD team learning research. We discuss their assumptions, methods, findings, theories, and potentially fruitful future directions. Moreover, we identify a fourth, recently emerged stream that differs from the others in its underlying concept of team learning and takes a more structural perspective.
3.1 The Team Learning Curve of Outcome Improvement in ISD
Research on performance improvement is traditionally conducted with an emphasis on learning curves (Argote et al. 1990), without any deeper investigation into the underlying mechanisms of team learning at a group level. Accordingly, the concept of team learning is one of mere outcome improvement (Edmondson et al. 2007). Rather than explaining properties of team learning, research in this realm has aimed at finding determinants of team performance improvements. Based on observed improvements in productivity and logical reasoning, research has assumed team learning to occur, without the empirical proof of direct measurements (Edmondson et al. 2007). For example, Teasley et al. (2002) reason that the performance increase they observe in settings of radically collocated ISD teams is largely caused by better opportunities for team learning. Whether and how such learning takes place, however, exceeds the boundaries of this school of research.
Based on analytical models and a case study, Mookerjee and Chiang (2002) show that tighter coordination policies are more appropriate for ISD teams which are early on the learning curve from a total effort perspective than they are for more advanced teams. Following the same school of thought, Boh et al. (2007) define learning as “the increase in productivity of developers as their experience increases” (Boh et al. 2007, p.1322). Accordingly, different types of experience are frequently examined and
related to the development of team performance. For example, the experience of developers and project managers in their respective roles within their ISD teams demonstrates a much stronger effect on team performance than does their mere respective experience within the company (Huckman et al. 2009). In accordance with this finding, experience with the applied software development methodology is emphasized as outstandingly decisive for team performance, while the knowledge gained from such experience is, interestingly, also being forgotten more rapidly than application or domain knowledge (Kang and Hahn 2009). Similarly, team productivity is found to be higher if ISD team members possess diverse experience with related tasks than if they are experienced in more unrelated systems or specialized in a single task (Boh et al. 2007). In general, higher familiarity by team members also appears beneficial for ISD team performance, and improved learning is frequently argued to be due to such familiarity (Boh et al. 2007; Huckman et al. 2009).
Despite its achievements, learning curve research on ISD team performance does not provide a conclusive answer to why such learning occurs or which mechanisms produce this complex interplay of experience and performance. Accordingly, there is an absence of theories in this stream of research that might explain such relationships. Only a minority of studies offer hints at concepts that might be useful for explaining these relationships (e.g. Teasley et al. 2002). Regarding research methodology, learning curve research in ISD builds on mostly quantitative analyses of archival data (Teasley et al. 2002; Boh et al. 2007; Kang and Hahn 2009; Huckman et al. 2009), selectively combined and enriched with other instruments like analytical models (Mookerjee and Chiang 2002) or qualitative follow-up interviews (Teasley et al. 2002).
3.2 Shared Knowledge and Group Memory in ISD
A second stream of literature conceptualizes team learning as a step towards task mastery and typically tries to explain learning effects as the "outcome of communication and coordination that builds shared knowledge by team members about their team, task, resources, and context" (Edmondson et al. 2007, p.277). The underlying assumption of this research is that commonly shared knowledge and meta-knowledge indicate that: (a) learning occurs at the team level; and (b) this common ground is explicitly and implicitly used by teams to improve their performance. It is thereby acknowledged that teams consist of individuals among whom knowledge is unevenly distributed and that the dissemination of individual knowledge into the group is central to realizing performance gains. Nevertheless, shared knowledge and group memory research conceptualizes team learning as the result of activities like the dissemination of knowledge rather than the activities themselves (Edmondson et al. 2007; Wilson et al. 2007; Goodman and Dabbish 2011). Socio-cognitive memory structures indicate that teams learn from individual experiences. Different structures of group memory have been proposed in order to grasp this concept. The most pronounced one is Wegner (1987)'s Transactive Memory System (TMS). Such group memory structures connect single team members, who possess specialized knowledge, through the shared meta-knowledge of how certain task characteristics match the single individuals' resources (Wegner 1987; Alavi and Leidner 2001). In other words, team members use each other as a memory source (Oshri et al. 2008). TMSs as an antecedent of team performance have been examined especially frequently at the team level in IS and organizational research (Edmondson et al. 2007; Choi et al. 2010).
One central finding of this stream of research is that knowledge and meta-knowledge shared in group memory account for the performance of ISD teams in several dimensions. Scholars find that knowledge shared in group memory improves ISD teams' effectiveness and efficiency, and enhances their ability to transfer knowledge to others, as well as their ability to integrate external knowledge creatively into software products (Faraj and Sproull 2000; Kotlarsky and Oshri 2005; He et al. 2007; Espinosa et al. 2007; Oshri et al. 2008; Nemanich et al. 2010; Maruping et al. 2009; Lin et al. 2011; Zhang et al. 2011). In accordance with the demands of Alavi and Leidner (2001) for more research into the facilitating conditions of learning and knowledge management, several research endeavors have been undertaken to find the antecedents for the establishment of group memory structures in ISD teams. Unsurprisingly, scholars find close and frequent interactions of team members to be one of these antecedents (Levesque et al. 2001; He et al. 2007). However, such close interactions are much harder to achieve in globally distributed software development teams whose members must potentially work across spatial, temporal, and socio-cultural boundaries. This distribution can heavily impact the teams' abilities to create a group memory system suiting their needs (He et al. 2007; Espinosa et al. 2007; Oshri et al. 2008). While the negative influence of team distribution can be reduced by employing a wide range of coordination mechanisms, from mutual visits through standardized organizational structures to communication technologies, these must be finely tuned as the situational settings influence the mechanisms' applicability (He et al. 2007; Oshri et al. 2008).
Group memory systems in ISD evolve over time and can grow or shrink (He et al. 2007). One reason is that ISD team members differ in their needs for interaction depending on their roles and tasks. For example, developers perceive different pressure points in team coordination than do ISD managers (Espinosa et al. 2007). Consequently, ISD teams whose members increasingly specialize in a certain role and work on tasks having low interdependency with others tend to have shrinking group memory over time (Levesque et al. 2001). In line with this argument, Vidgen and Wang (2009) propose that more interconnecting practices, multi-skill development, and autonomy in ISD can enhance team learning. However, recent findings by Nemanich et al. (2010) indicate that there are more complex relationships between team knowledge, autonomy, individual developers’ capabilities, and teams’ ability to learn, than previously assumed. Interestingly, these authors find that possession of existing knowledge does not necessarily improve ISD teams’ ability to learn. Quite the contrary, they find that teams with less prior knowledge are forced to learn more rapidly and receive more benefits from doing so (Nemanich et al. 2010). Moreover, mechanisms to control the importance of large bodies of shared knowledge in ISD teams also appear to exist. Maruping et al. (2009), for example, find that collective code ownership reduces the impact of the group memory system on the quality of software development, while established coding standards increase it. Finally, the establishment of a group memory system can also be fostered by appropriate knowledge management systems (Zhang et al. 2011).
Two theoretical lenses underlie the majority of studies on group memory in ISD research: (1) shared cognition based on the concept of shared mental models (Cannon-Bowers et al. 1993); and (2) TMS (Wegner 1987). Only Vidgen and Wang (2009) and Zhang et al. (2011) ground their work in other theories, namely absorptive capacity and complex adaptive systems. From a methodological perspective, two research designs are applied: (1) survey-based quantitative analyses (Faraj and Sproull 2000; Levesque et al. 2001; He et al. 2007; Lin et al. 2011; Maruping et al. 2009; Nemanich et al. 2010; Zhang et al. 2011); and (2) qualitative, interview-based case studies (Kotlarsky and Oshri 2005; Espinosa et al. 2007; Oshri et al. 2008; Vidgen and Wang 2009). While several of these studies acknowledge that there are processes at the team level that are decisive for the development and use of commonly shared knowledge and meta-knowledge, such processes are not captured in any of these studies. In general, this stream of research measures the teams’ state of common knowledge or meta-knowledge, but it does not address the actual activities which lead to changes in such knowledge.
3.3 Team Learning Behavior as a Group Process in ISD
While research on group memory merely acknowledges the existence and importance of team level processes and activities without addressing the same, a behavioral school of team learning research exists which focuses exactly on these aspects. Scholars in this stream of research conceptualize team learning as an ongoing group process of reflection and action (Edmondson 1999), typically including different activities such as information sharing and reflection on expertise (Edmondson et al. 2007). The focus of this stream of research is on teams’ learning behaviors from both a theoretical and methodological perspective.
Team learning scholars have highlighted that, not only is the team knowledge important for team performance, but so is what team members actually do with this knowledge. For example, Walz et al.
(1993) find that software design team members engage in the acquisition, the sharing, and the integration of knowledge into the group. While an overall increase in the level of domain knowledge at a team level might be helpful, the authors also argue that managed conflict within the team stimulates the team’s learning behaviors (Walz et al. 1993). Liang et al. (2010) refine this proposition by demonstrating that the quality of developed software actually increases when team conflict that may be attributable to team members’ differing backgrounds and expertise exists during a task. Such task conflict does not necessarily harm the productive communication within the team, but it stimulates learning behaviors. Notwithstanding, evidence also exists that simply teaming developers with different backgrounds and expertise alone does not necessarily lead to more engagement in learning behaviors or more creative results (Tiwana and Mclean 2005). Further research might be needed to clarify under what conditions ISD teams can benefit from heterogeneous expertise and task conflict in order to improve team performance. Such conflict about how to complete a task requires spare resources for discussions and conflict resolution. Consequently, the relationship between task conflict and team performance is evidently ambiguous. In contrast, other scholars acknowledge that such learning activities might have a positive influence on the team’s performance, but argue that the stronger and more important effect of such learning activities is the resulting increase in individual team members’ satisfaction with work (Janz and Prasarnphanich 2009; Janz and Prasarnphanich 2003). Moreover, focusing on learning behaviors, researchers in this stream provide a possible explanation for the ambiguous findings on ISD team autonomy (Vidgen and Wang 2009, Nemanich et al. 
2010) in group memory research – namely that different types of team autonomy might stimulate different learning behaviors and vary in their importance for the overall level of learning (Janz and Prasarnphanich 2009; Li et al. 2009).
Research on team learning behaviors in ISD is not based on a single dominant theoretical lens. It draws from a variety of theories, such as: collaborative learning theory (Janz and Prasarnphanich 2003; Janz and Prasarnphanich 2009); information theory (Liang et al. 2010); and social interdependence theory (Li et al. 2009). Regarding research methodology, this stream heavily builds on quantitative survey-based designs. Yet, as in related disciplines, many scholars exclusively collect research data on an individual level. This can lead to a divergence in the levels of analysis and theory that is neither always necessary nor desirable in team learning research (Goodman and Dabbish 2011).
### 3.4 A New Structural Approach to Analyze Team Learning in ISD
Research in team learning behaviors theoretically and empirically examines learning behaviors as a group process, in the sense of uniform activities like reflection and discussion at a team level. More recently, scholars have adopted the perspective that the individual’s role within ISD team learning is more multifaceted than has hitherto been acknowledged (Skerlavaj et al. 2010; Sarker et al. 2011). They conceptualize the team as a network of individuals who interact in different ways and intensities. In this perspective, team learning consequently consists of interactions between these networked actors. To examine these interactions more closely, researchers choose methods and theories that account for both the individual and the team level in their analyses.
For different reasons, some individuals can become more important for overall team learning than others. For example, Sarker et al. (2011) show that there can be “stars” in globally distributed ISD teams who comprise the central institutions for knowledge exchange activities between team members. These stars are highly trusted by the rest of the team and communicate more frequently with more team members. As a result, they can also serve as boundary spanners for different sub-groups within the team. Interestingly, the stars’ own knowledge of technologies or management is not necessarily high (Sarker et al. 2011). Nevertheless, Skerlavaj et al. (2010) show that such central actors in the learning network are often found in senior positions and that the flow of knowledge between single team members is not necessarily reciprocal. Consequently, team members who share more knowledge do not necessarily profit from knowledge returned at an individual level. This might be one possible explanation why research in ISD team learning behaviors as a group process has produced inconsistent
findings about the effects of team level engagement in teaching and assistance to team members (Janz and Prasarnphanich 2003; Janz and Prasarnphanich 2009; Li et al. 2009).
Skerlavaj et al. (2010) and Sarker et al. (2011) follow an innovative approach in researching team learning in ISD by accounting for the structural and relational properties of the ISD teams as groups of interlinked individuals. In accordance with this perspective, they apply Social Network Analysis (SNA) (Wellman et al. 2003) as a central method in their studies of ISD teams. Regarding underlying theories, the scholars draw from a variety of different perspectives to explain the observed phenomena. Despite the low number of studies taking this structural view to date, we argue that these papers represent a new stream of research in ISD team learning. The reason is that they propose a radically new structural conceptualization of the learning team as a network of individuals whose interactions constitute team learning in their entirety. Accordingly, they also show methodological innovation by the application of SNA to the field of ISD team learning. Moreover, by explicitly taking a perspective that models a relationship between team learning and the learning of team members, researchers might possibly be able to overcome one of the most criticized aspects of team learning research, namely the confusion of group learning and individual learning in a group (Goodman and Dabbish 2011). Extending the body of research in this stream appears very promising as it opens up a wide field of explanations for the team level phenomena of learning.
<table>
<thead>
<tr>
<th>Methods / Streams in ISD Team Learning</th>
<th>Team Learning Curve</th>
<th>Group Memory</th>
<th>Group Process</th>
<th>Structural Perspective</th>
</tr>
</thead>
<tbody>
<tr>
<td>survey-based quantitative</td>
<td>7</td>
<td>5</td>
<td></td>
<td></td>
</tr>
<tr>
<td>interview-based qualitative</td>
<td>1</td>
<td>4</td>
<td>1</td>
<td></td>
</tr>
<tr>
<td>archival data for quantitative</td>
<td>4</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>others (simulation, SNA)</td>
<td>1</td>
<td></td>
<td></td>
<td>2</td>
</tr>
<tr>
<td>number of studies included</td>
<td>24</td>
<td>5</td>
<td>11</td>
<td>6</td>
</tr>
</tbody>
</table>
Table 1. ISD team learning studies: data collection and analysis methods (multiple possible)
Table 1 provides an overview of the studies of ISD team learning we found as well as their methodologies. In summary, we find four streams of research on team learning in ISD based on the three categories proposed by Edmondson et al. (2007): (1) learning curve research which treats learning as a black box between situational factors and outcome improvement; (2) group memory research which takes a knowledge-centered perspective on team learning as the creation of a system of common knowledge and meta-knowledge; (3) team learning behavior research which addresses the distinct team level processes of learning as observable group activities; and finally (4) a structural approach which explicitly accounts for the dynamic learning structures and relations between individuals that constitute team learning in their entirety. These schools all make valuable contributions to a growing body of knowledge and expose distinct features which shed light on equally distinct aspects of team learning in ISD. We consequently agree with Edmondson et al. (2007) that a single, unified concept of team learning is not what scholars should strive to pursue. Instead, we argue for pluralism and cross-fertilization of research streams. Fruitful effects may be created by more theoretical and methodological diversity. Learning behavior research, for example, might profit from analysis of archival data which is common in learning curve research. This could complement existing survey-based approaches to reduce the occasional disparity of the levels of theory and analysis. Similarly, structural approaches might take a larger variety of team learning processes into account to gain a more precise picture of the interactions between individuals and their effects on the team as a collective. Despite all calls for diversity, we also advocate more consistent measures and operationalization across research studies as necessary prerequisites for the cumulative progress of ISD team learning research.
## 4 Avenues for Future Research
The state of research we have presented so far offers a number of insights into ISD team learning. Comparing it to literature in related disciplines, it is evident that ISD research has partially or entirely ignored some aspects that research in related disciplines has examined. Below, we highlight areas where organization and management science research has created a diverse body of knowledge but ISD team learning research has examined them only superficially. Thereafter, we address the resulting implications for what we perceive as outstanding ISD management and methodology research, given the present state of team learning research in ISD as well as that of related disciplines.
### 4.1 Contrasting ISD Team Learning to Neighboring Disciplines
**Team Leader Behavior:** Team leaders often play a dominant role in ISD projects (Nambisan and Wilemon 2000). Sarin and McDermott (2003) find that a democratic leadership style, initiation of a goal structure by the team leader, and the leader’s position within the organization positively relate to team learning. Team leadership also plays a critical role in team effectiveness and team performance (Edmondson et al. 2007; Edmondson and Nembhard 2009). For instance, team leaders can stimulate team learning by involving the members in the decision-making process and outlining the team goals and expectations (Sarin and McDermott 2003). In line with this argument, Van der Vegt et al. (2010) find that the configuration of power within a team is a key factor for team learning.
Despite great attention to the role of project managers or team leaders in the success or failure of ISD projects, the link between team leadership and team learning is often missing in ISD team learning research. Among those few researchers who address this link, Vidgen and Wang (2009) note that central task allocation by a project manager without consultation with team members can inhibit learning in ISD. Additionally, Sarker et al. (2011) find formal and emergent leaders to be central for knowledge transfer in globally distributed ISD. Moreover, a change in team leadership style should also influence team learning. A Scrum Master, for instance, is not a traditional team leader. Rather, a Scrum Master’s role is that of a facilitator who solves key issues that impede the team’s success and takes care of interactions and collaborations (Dybå and Dingsøyr 2008). One of the most important roles of a Scrum Master is to conduct retrospectives in order to assess lessons learned. The role of a Scrum Master may consequently influence the team’s learning very differently from traditional leaders. Therefore, ISD researchers should evaluate and assess different leadership behaviors and roles, as well as their influence on group learning in ISD teams.
**Team Learning Goal:** Behavior at work, and learning behavior in particular, is affected by different goals and purposes. Tjosvold et al. (2004) propose that team members may reach different conclusions about how their individual goals are structured and, as a result, adapt their interactions with other team members. Trying to achieve individual goals in a team can conflict with other members’ interests. Tjosvold et al. (2004) conclude that goal setting is likely to affect team learning behavior in terms of interactions, information sharing, and support for other members in group challenges. Using goal orientation theory, Hirst et al. (2009) note that “the relationship between an individual’s goal orientation and creativity is contingent on team learning behavior” (p. 282).
Prior ISD research emphasizes that an incongruity of goals, for instance, is often found in distributed teams in offshore ISD projects (Levina and Vaast 2008). Team members can have competitive and independent goals that in turn affect team learning. This issue might also be one possible explanation why the influence of knowledge sharing is dependent on the geographical pattern of team member distribution in new product development teams (Staples and Webster 2008). Research on ISD team learning should clarify to what degree learning goals can beneficially influence team learning and performance in different configurations and team structures.
**Task Characteristics:** There is empirical evidence of the effects of task characteristics on team learning. Edmondson (1999) outlines several task characteristics that she argues affect team learning. She asserts that “highly routine repetitive tasks with little need for improvement or modification” may inhibit team efficiency and performance (Edmondson 1999, p.354). On the other hand, uncertain and risky tasks may raise the need for teams to learn continuously in order to understand the environment as well as customers’ needs. Uncertain tasks may further require team members to coordinate more effectively. Wong (2004) measures task routineness by the frequency of unexpected and novel events that occur during the accomplishment of a task. Research in this direction has created measures and operationalizations of task characteristics in areas related to ISD research, such as new product development (Gino et al. 2010). However, only a few ISD team learning scholars (see Huckman et al. 2009) have attempted to develop such instruments for software development tasks in order to set the characteristics of ISD tasks in relation to team learning and performance. We argue that doing so is a worthwhile endeavor, since not all tasks in ISD are non-routine and not all of them demand creativity; different types of tasks might therefore require different levels of team learning.
### 4.2 Implications for ISD Management
One of the most prominent fields of research in ISD management is concerned with globally distributed software development projects. An enormous challenge in such a setting is the management of culturally diverse ISD team members. Research is continuously trying to discover and explain effective management practices to address it (Levina and Vaast 2008; Gregory 2010). Team learning scholars have contributed to this endeavor by finding mechanisms which stimulate the creation of a common group memory (Kotlarsky and Oshri 2005; Kanawattanachai and Yoo 2007, Oshri et al. 2008). They have also highlighted the existence of different situational prerequisites to and inhibitors of learning activities across the global team, such as trust, psychological safety, and aspects of collocation (Staples and Webster 2008; Van der Vegt et al. 2010; Choo 2011). Notably, team learning scholars have recently argued that globally distributed team members might actually never be able to develop a real shared mental model because of their different backgrounds. Instead, creating cross-understanding is proposed as a better solution (Huber and Lewis 2010). The implication for global ISD management is that team members should be brought into a position to understand each others’ various values and manner of thinking rather than striving to create one single “negotiated culture” (Gregory 2010, p.6). Future research should investigate which underlying team processes can be stimulated in order to create such cross-understanding.
Team learning research also provides an explanation why personnel turnover is an important cost driver in offshore ISD (Dibbern et al. 2008). When single members leave the team, the existing group memory is negatively impacted, causing a decrease in team performance on occasions when such loss is not successfully accounted for by management (Lewis et al. 2007). Since not all individuals are equally central to team learning activities and knowledge flows (Skerlavaj et al. 2010; Sarker et al. 2011), the loss of a single developer can potentially corrupt the entire memory system within an ISD team. As such, future research should investigate how actors who are pivotal to the team’s learning activities can be identified so that timely precautions are taken to address their central role within the team with special care.
### 4.3 Implications for ISD Methodology
With regard to ISD methodology, team learning research makes several contributions and depicts potential areas for future investigations. First, the findings on team autonomy, which is a central aspect in agile development methods such as extreme programming, are not consistent (Janz and Prasarnphanich 2009; Vidgen and Wang 2009; Nemanich et al. 2010). They indicate that different types of team autonomy can stimulate different team behaviors, but not all of them improve performance.
Research might address the question of what kind of autonomy should be given to ISD teams in different contexts in order to simultaneously stimulate learning and increase performance. Next, increased development of multiple skills by team members and reduced specialization, which are found in lean software development approaches (see Dybå and Dingsøyr 2008; Poppendieck and Poppendieck 2003), influence team learning behavior by reducing the need for awareness of expertise location (Vidgen and Wang 2009; Maruping et al. 2009). However, this does not necessarily lead to higher team creativity or performance, as developing similar skills reduces the heterogeneity of expertise and potentially fruitful task conflicts (Tiwana and Mclean 2005; Liang et al. 2010). Under certain conditions, hierarchical team structures with specialist roles can actually foster team learning (Bunderson and Boumgarden 2010). Future research should, therefore, investigate in which cases in ISD an agile team of generalists can perform better and in which cases a hierarchical team structure with several specialists might be superior. Finally, at least some development practices manipulate the relative influence of team learning on software quality in ISD (Maruping et al. 2009). However, which specific developer behaviors are triggered by such practices has so far remained obscure. Revealing these behaviors and their underlying mechanisms would constitute a significant step in understanding the relationship between development methodology, team learning, and team performance (Maruping et al. 2009).
## 5 Conclusion
To the best of our knowledge, we are the first to conduct a literature review of scholarly research on team learning with a focus on ISD. Unlike existing reviews, we thereby account for findings on the specifics of ISD as a complex organizational function. This is in line with recent calls for more domain-specific theories of team learning (Edmondson et al. 2007). Based on the categorization scheme of Edmondson et al. (2007), we examine three perspectives on team learning applied to the field of ISD and highlight their distinct characteristics, assumptions, and limitations: (1) the team learning curve, (2) shared knowledge and group memory, and (3) team learning behavior. In addition to these streams, we identify an innovative approach to researching team learning in ISD which takes a structural and relational perspective. We present several aspects in which these streams of research can cross-fertilize each other. By contrasting team learning in ISD to related disciplines, we identify team leader behavior, learning goals, and task characteristics as concepts that ISD team learning research has widely neglected. We also highlight several implications for ISD methodology and management, especially in globally distributed settings and agile development practices. In summary, we hope to contribute to the progress of our field in understanding, explaining, and improving team learning in ISD.
## References
Can Testedness be Effectively Measured?
Iftekhar Ahmed
Oregon State University, USA
ahmedi@oregonstate.edu
Rahul Gopinath
Oregon State University, USA
gopinatr@oregonstate.edu
Caius Brindescu
Oregon State University, USA
brindesc@oregonstate.edu
Alex Groce
Oregon State University, USA
agroce@gmail.com
Carlos Jensen
Oregon State University, USA
cjensen@oregonstate.edu
ABSTRACT
Among the major questions that a practicing tester faces are deciding where to focus additional testing effort, and deciding when to stop testing. “Test the least-tested code, and stop when all code is well-tested” is a reasonable answer. Many measures of “testedness” have been proposed; unfortunately, we do not know whether these are truly effective.
In this paper we propose a novel evaluation of two of the most important and widely-used measures of test suite quality. The first measure is statement coverage, the simplest and best-known code coverage measure. The second measure is mutation score, a supposedly more powerful, though expensive, measure.
We evaluate these measures using the actual criteria of interest: if a program element is (by these measures) well tested at a given point in time, it should require fewer future bug-fixes than a “poorly tested” element. If not, then it seems likely that we are not effectively measuring testedness. Using a large number of open source Java programs from Github and Apache, we show that both statement coverage and mutation score have only a weak negative correlation with bug-fixes. Despite the lack of strong correlation, there are statistically and practically significant differences between program elements for various binary criteria. Program elements (other than classes) covered by any test case see about half as many bug-fixes as those not covered, and a similar line can be drawn for mutation score thresholds. Our results have important implications for both software engineering practice and research evaluation.
CCS Concepts
• Software and its engineering → Software testing and debugging; Empirical software validation;
Keywords
test suite evaluation, coverage criteria, mutation testing, statistical analysis
1. INTRODUCTION
The quality of software artifacts is one of the key concerns for software practitioners and is typically measured through effective testing. While it is widely held that “you cannot test quality into a product,” you can use testing to detect that the Software Under Test (SUT) has faults, and to estimate its likely overall quality. Moreover, while testing itself does not produce quality, it leads to the discovery of faults. When these faults are corrected, software quality improves.
Testing software poses questions. First, how much testing is needed? Has “enough” testing been done? Second, where should future test efforts be applied in a partially tested program? The typical approach to answering these questions is to measure the quality of the test suite, not the SUT. Numerous measures, primarily focused on code coverage [20–22] have been proposed, and numerous organizations set testing requirements in terms of coverage levels [37]. Both code coverage and mutation score measure the “testedness” of an SUT, using the dynamic results of executing a test suite. However, it is not established that using such testing requirements, or suite quality measures in general, translates into an effective practice for producing better software.
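As a concrete, if simplified, illustration of the first of these measures: statement coverage for a program element can be computed from the set of its executable statements and the set of statements the test suite actually executes. The function below is a sketch for exposition only, not the instrumentation used in the study:

```python
def statement_coverage(executable_lines, executed_lines):
    """Fraction of executable statements exercised by the test suite."""
    executable = set(executable_lines)
    covered = executable & set(executed_lines)
    return len(covered) / len(executable) if executable else 0.0

# A 10-statement method of which the suite executes 5 is 50% covered.
print(statement_coverage(range(1, 11), {1, 2, 3, 5, 8}))  # → 0.5
```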
While test driven development, in particular, has pushed testing to new prominence, practicing software programmers often balk at having to satisfy what they see as arbitrary coverage requirements. Some go so far as to suggest that “code coverage is a useless target measure”[2]. Some studies seem to support this conclusion, at least in part[25]. The utility of testing itself has even come under attack[3].
Our aim in this paper is to place the evaluation of test suites (and thus decision-making in testing) on a firmer footing, using a measure that translates directly into practical terms. There is, of course, no end to studies measuring the effectiveness of test suite evaluation techniques. However, these studies tend to either cover only a few subject programs or faults, not use real faults, not measure what developers directly care about, or assume the validity of mutation testing, which is itself a relatively unproven method for evaluating a test suite. The methodology of such studies also often involves using tool-generated or randomly-chosen subset-based test suites to measure correlation. Such suites may not resemble real-world test suites, and thus be of little relevance to most developer practice.
We propose a simpler and more direct method of evaluation that eliminates some of the concerns mentioned above. It can be argued that a correlation between measure of suite quality and fault detection is meaningless if the faults detected are unimportant, trivial, or artificial. As a recent paper evaluating the ability of automated unit test generators to detect real faults phrased it “Just because a test suite...[is effective] does not necessarily mean that it will find the faults that matter” [38]. It is usually hard to argue that a bug that has been discovered and fixed did not matter: fixing bugs is difficult and resource-intensive, requiring developers (and possibly testers) to devote time to implementing a correction (and, hopefully, validating it). In many cases, the bug was detected because it caused problems for users. There is evidence that problems in code, such as those identified by code smells, that do not have significant immediate consequences are often never corrected, even at the price of design degradation [1]. Bug-fixes therefore usually indicate important defects: in general, developers thought these were the problems worth addressing. The most practical goal of testing therefore, is to prevent future bug-fixes, by detecting faults before release, avoiding impact on users, and usually lowering development cost.
We should then evaluate test suite quality measures by a simple process: does a higher measure of testedness for a program element (in our work, a statement, block, method or class) correlate with a smaller number of future bug-fixes? By avoiding whole-project measures and focusing on individual program elements, we avoid the confounding effects of test suite size [25]. While a large test suite can produce higher coverage and detect more faults, even if coverage and fault detection are not themselves directly related, it cannot (at least in any way we can imagine) cause statements that are covered to have fewer faults than those that are not covered, unless coverage itself is meaningful. Similarly, using individual program entities as the basis of analysis mitigates some of the possible effects of, e.g., test suites with good assertions also having better coverage. This can cause a test suite with high coverage to perform better than one without high coverage, on average, but it cannot, we claim, plausibly result in covered entities having fewer bug-fixes than ones not covered, unless coverage itself matters.
The core argument for our analysis is as follows: if a particular program element is fully tested to conform to its specification, then that program element should have no bug-fixes applied (until the element ceases to exist as a result of a non-bug-fix modification, e.g. adding new functionality potentially invalidating the old specification). Similarly, a program element that is not tested at all should on average have a higher chance of seeing future bug-fixes applied for the simple reason that a fault had no chance of being caught through testing. This, on its own, should result in a strong negative correlation between testedness and future bug-fixes, if our fundamental assumption about the utility of testing is correct and our measure of testedness is effective. In general, a well-tested program element should require fewer bug fixes than a less well-tested program element. In this paper we assume (and later, based on empirical data, argue) that testing itself is beneficial. We therefore primarily aim to evaluate the measurement of test suite quality/testedness. We focus on two important, widely studied, measures of suite quality. First, statement coverage is the simplest, most widely used, and easiest to understand coverage measure, and has some support as an effective measure of test suite quality in recent work [21]. Second, mutation score is not only commonly advocated as the best method for evaluating suite quality, but is essential to most other studies of coverage method effectiveness [22, 27].
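The idea behind mutation score can be shown with a deliberately tiny sketch. The mutants below are written by hand (real mutation tools generate them automatically, and this is not the paper's tooling); a mutant is “killed” when at least one test distinguishes it from the original:

```python
# Function under test: maximum of two numbers.
original = lambda a, b: a if a > b else b

# Hand-written "mutants", each altering one syntactic element.
mutants = [
    lambda a, b: a if a < b else b,   # relational operator > mutated to <
    lambda a, b: a if a >= b else b,  # > mutated to >= (behaves like original here)
    lambda a, b: a,                   # conditional branch deleted
]

tests = [((1, 2), 2), ((3, 0), 3)]  # (arguments, expected result)

def mutation_score(tests, mutants):
    """Fraction of mutants that at least one test kills."""
    killed = sum(
        any(m(*args) != expected for args, expected in tests)
        for m in mutants
    )
    return killed / len(mutants)

# Sanity check: the original function passes every test.
assert all(original(*args) == expected for args, expected in tests)
print(round(mutation_score(tests, mutants), 2))  # → 0.67 (2 of 3 mutants killed)
```

The second mutant survives because no test separates `>` from `>=`; this is the familiar problem of mutants that are equivalent with respect to a given suite.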
We evaluate these test suite evaluation methods empirically using a large representative set of real-world programs, real-world test suites, and bug-fixes, and find that while there is a small (but statistically significant) negative correlation between our testedness measures and future bug-fixes for program elements, the effect is so small as to be practically insignificant. There is very little useful continuous relationship between measures of testedness and actual tendency to not have bugs detected and fixed, and while it is reasonable to bet that a more-tested element will have fewer faults, the size of the effect is very small.
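The element-level analysis described here amounts to a rank correlation between testedness and later bug-fixes. The self-contained sketch below uses invented numbers purely to illustrate the computation; the study itself uses real Java projects and real bug-fix histories:

```python
def ranks(xs):
    """Ranks (1-based), assigning tied values their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

coverage  = [0.0, 0.2, 0.5, 0.7, 0.9, 1.0]  # hypothetical per-element testedness
bug_fixes = [4,   3,   3,   1,   2,   0]    # hypothetical future bug-fixes
print(round(spearman(coverage, bug_fixes), 2))  # → -0.93
```

In this toy data the negative correlation is strong; the paper's finding is that on real data the correlation, while negative and statistically significant, is far weaker.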
However, we do find that there is a consistent and practically (as well as statistically) significant difference in the mean number of bug-fixes for code, if we apply selected binary measures of “well-testedness” based on coverage or mutation score. For example, program elements with at least a 75% mutation score see, on average, only about half as many future bug-fixes (normalized) as program elements with a lower mutation score.
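This kind of binary-criterion comparison can be sketched as follows; the per-element numbers are invented for illustration only:

```python
# Hypothetical per-element data: (mutation score, normalized bug-fixes).
elements = [
    (0.10, 0.8), (0.30, 0.6), (0.50, 0.7),
    (0.80, 0.3), (0.90, 0.4), (1.00, 0.2),
]

def mean_fixes(data, predicate):
    """Mean normalized bug-fixes over elements whose score matches `predicate`."""
    fixes = [f for score, f in data if predicate(score)]
    return sum(fixes) / len(fixes)

well_tested = mean_fixes(elements, lambda s: s >= 0.75)
poorly_tested = mean_fixes(elements, lambda s: s < 0.75)
print(round(well_tested, 2), round(poorly_tested, 2))  # → 0.3 0.7 in this toy data
```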
One intuitively appealing explanation for the low correlation of testedness to bug-fixes is that, even if “poorly” tested, unimportant pieces of code are likely to see few future bug fixes. If very few users execute a program element, or if its effects have very limited impact, then the bug will likely not be fixed (even if reported). However, the problem of varying program element importance is unlikely to be the root cause for the lack of a useful continuous correlation for suite evaluation measures. If it were, we would expect the effects of importance to also prevent binary testedness criteria based on mere coverage from predicting future bug-fixes (since no one will bother to test program elements that are unlikely to ever exhibit important bugs). However, like program elements with < 75% mutation score, program elements that are not covered are also likely to see nearly twice as many future bug-fixes.
Nonetheless, perhaps a suite quality measure should reflect the importance of program elements. However, forcing developers to annotate code by its importance is impractical; we need a static measure of importance. One approach is to say that complex elements are more likely to be important, since developing complex code with many operators and conditionals, but low importance, is an unwise use of development resources. In this case, in addition to its other advantages, mutation testing may help take importance of code into account, in that complex program elements produce more mutants than simple elements (e.g., a simple logging statement will seldom perform any calculations, and so often only produce a single statement-deletion mutant). We therefore also measure whether the number of mutants (as a measure of code complexity) predicts the number of bug-fixes applied to a program element, and whether the number of mutants predicts the mutation score for an element. Both effects are significant but small. Surprisingly, more complex code sees slightly fewer bug-fixes than simple code. As might be expected if complexity is associated with importance, more complex code is also slightly more tested, according to mutation score. Both effects are too weak to be of much practical value, however.
---
\(^4\)By normalized bug-fixes, we mean bug-fixes per statement/line for elements larger than a single statement or line; unless we indicate otherwise, we always normalize bug-fixes.
\(^5\)We only demonstrate this result for statements and methods; there were too few classes that were not covered by any tests in our data to show a significant relationship.
Our findings with respect to correlation of testedness measures and bug-fixes are potentially troubling for the research community. Software testing researchers often use a difference of a few percentage points in mutation score or a coverage measure as a means to assert that one test generation or selection technique is superior to another. However, our data shows that relying on a few percentage points is dangerous, as such small differences may not indicate real impact in terms of defects that are worth fixing. On the other hand, our data seems to support the types of “arbitrary” adequacy criteria often imposed by managers or governments (if not the precise values used). Indeed, our data suggests that while a continuous ranking of testedness for program elements is not currently possible, using various empirically-validated “strata” of testedness (not covered, covered but with poor mutation score, covered with high mutation score) may provide a simple, practical way to direct testing efforts.
The contributions of this paper are:
- A novel approach to examining the utility of test suite quality measures that is based on direct practical consequences of testing.
- Analysis of relationships between bug-fixes, test suite quality (testedness) measures, and code complexity and importance metrics for 49 sampled projects from Github and Apache.
- Evidence that there is small negative correlation between the number of mutants (normalized) and the number of future bug-fixes to a program element, indicating that complexity alone does not predict importance well; in fact, more complex program elements seem to be changed less often than simple ones. However, this may partly be due to the fact that more complex elements are also somewhat more well-tested (in terms of mutation score).
- Evidence that the negative correlation between testedness (by our measures) and number of future normalized bug-fixes is statistically significant, but far too small to have much practical impact.
- Evidence that well-chosen adequacy criteria (e.g., is the mutation score above 75%) can be used to predict future normalized bug-fixes in a practical way (leading to differences of a factor of two in expected future bugs), and can serve to distinguish untested, poorly tested, and well-tested elements of an SUT.
Our data is available for inspection and further analysis at http://eecs.osuosl.org/rahul/fse2016/.
2. RELATED WORK
Ours is not the first study to attempt to evaluate measures of testedness [22]. Researchers have often attempted to prove that mutation score is well correlated with real world faults. DeMillo et al. [13] empirically investigated the representativeness of mutations as proxies for real faults. They examined the 296 errors in TeX and found that 21% were simple faults, while the rest were complex errors. Daran et al. [11] investigated the representativeness of mutations empirically using error traces. They studied the 12 real faults found in a program developed by a student, and 24 first-order mutants. They found that 85% of the mutants were similar to real faults.
Another important study by Andrews et al. [2], investigated the ease of detecting a fault (both real faults and hand seeded faults), and compared it to the ease of detecting faults introduced by mutation operators. The ease was calculated as the percentage of test cases that killed each mutant. Their conclusion was that the ease of detection of mutants was similar to that of real faults. However, they based this conclusion on the result from a single program, which makes it unconvincing. Further, their entire test set was eight C programs, which makes the statistical inference drawn liable to type I errors. We also note that the programs and seeded faults were originally from Hutchins et al. [24] who chose programs that were subject to certain specifications of understandability, and the seeded faults were selected such that they were neither too easy nor too difficult to detect. In fact, the study eliminated 168 faults for being either too easy or too hard to detect, ending up with just 130 faults. This is clearly not an unbiased selection and cannot really tell us anything about the ease of detection of hand seeded faults in general (because the criteria of selection itself is confounding). A follow up study [3] using a large number of test suites from a single program, space.c, found that the mutation detection ratio and fault detection ratio are related linearly, with similar results for other coverage criteria (0.83 to 0.9). Linear regression on the mutation kill and fault detection ratios showed a high correlation (0.9).
The problems with some of these studies were highlighted in the work of Namin et al. [34] who used the same set of C programs as Andrews et al. [2], but combined them with analysis of four more Java classes from the JDK. This study used a different set of mutation operators and fault seeding by student programmers for the Java programs. Their analysis concluded that we have to be careful when using mutation analysis as a stand-in for real faults. The programming language, the kind of mutation operators used, and even the test suite size all have an impact on the relation between mutations introduced by mutation analysis and real faults. In fact, using a different mutation operator set, they found that there is only a weak correlation between real faults and mutations. However, their study was constrained by the paucity of real faults, which were only available for a single C program (also used in Andrews et al. [2]). Thus, they were unable to judge the ease of detection of real faults in Java programs. Moreover, the students who seeded the faults had knowledge of mutation analysis which may have biased the seeded faults (thus resulting in high correlation between seeded faults and mutants). Finally, the manually seeded faults in C programs, originally introduced by Hutchins et al. [24], were again confounded by a selection criteria which eliminated the majority of faults as
being either too easy or too hard to detect. Just et al. [27], using 357 real faults from 5 projects, showed that 1) adding more fault-detecting tests to a test suite led to the mutation score increasing more often (73%) than either branch (50%) or statement coverage (30%) and 2) mutation score was more positively correlated with fault detection than either of the other measures. Multiple studies provide evidence that mutation analysis subsumes different coverage measures [8, 30, 35], and it is on this basis that mutation score is often regarded as the “gold standard” for test suite quality measures.
Besides mutation score, the other metric that is commonly used to measure the adequacy of testing is code coverage, that is, a measure of the set of program elements or code paths that are executed by a set of tests. A large body of work considers the relationship between coverage criteria and fault detection — however, the analysis is often “mediated” by assuming the validity of mutation analysis (this is why we discussed mutation above, before turning to code coverage). Mockus et al. [32] found that increased coverage leads to a reduction in the number of post-release defects but increases the amount of test effort. Gligoric et al. [19, 20] used the same statistical approach as our paper, measuring both Kendall τ and \( R^2 \) to examine correlations, for realistically non-adequate suites. Gligoric et al. found that branch coverage does the best job, overall, of predicting the best suite for a given SUT, but that acyclic intra-procedural path coverage is highly competitive and may better address the issue of ties, which is important in their research/method comparison context. Inozemtseva et al. [25] investigated the relationship of various coverage measures and mutation score for different random subsets of test suites. They found that when test suite size is controlled for, only low to moderate correlation is present between coverage and effectiveness, for all coverage measures used. Frankl and Weiss [16] compared branch coverage and def-use coverage, showing that def-use was more effective for fault detection and correlated more strongly with it. Namin and Andrews [33] showed that fault detection ratio (non-linearly) correlated well with block coverage, decision coverage, and two different data-flow criteria. Their research suggested that test suite size was a significant factor in the model. Wei et al.
[44] examined branch coverage as a quality measure for suites for 14 Eiffel classes, showing that for randomly generated suites, branch coverage behavior was consistent across many runs, while fault detection varied widely. In their experiments, early in random testing, when branch coverage rose rapidly, current branch coverage had high correlation to fault detection, but branch coverage eventually saturated while fault detection continued to increase; the correlation at this point became very weak.
Gupta et al. [23] compared the effectiveness and efficiency of block coverage, branch coverage, and condition coverage, with mutation kill of adequate test suites as their evaluation metric. They found that branch coverage adequacy was more effective (killed more mutants) than block coverage in all cases, and condition coverage was better than branch coverage for methods having composite conditional statements. The reverse, however, was true when considering the efficiency of suites (average number of test cases required to detect a fault). Li et al. [29] compared four different criteria (mutation, edge pair, all uses, and prime path), and showed that mutation adequate testing was able to detect the most hand seeded faults (85%), while other criteria performed similarly to each other (in the range of 65% detection). Similarly, mutation coverage required the fewest test cases to satisfy the adequacy criteria, while prime path coverage required the most. Therefore, while there are no compelling large-scale studies of many SUTs selected in a non-biased way to support the effectiveness of mutation testing, it is at least highly plausible as a better standard than other criteria.
Cai et al. [9] investigated correlations between coverage criteria under different testing profiles: whole test set, functional test, random test, normal test, and exceptional test. They investigated block coverage, decision coverage, C-use and P-use criteria. Curiously, they found that the relationship between block coverage and mutant kills was not always positive. Block coverage and mutant kills had a correlation of \( R^2 = 0.781 \) when considering the whole test suite, but as low as 0.045 for normal testing and as high as 0.944 for exceptional testing. The correlation between decision coverage and mutation kills was higher than statement coverage, for the whole test suite (0.832), ranging from normal test (0.368) to exceptional test (0.952). Frankl et al. [17] compared the effectiveness of mutation testing with all-uses coverage, and found that at the highest coverage levels, mutation testing was more effective. Kakarla [28] and Inozemtseva [26] demonstrated a linear relationship between mutation detection ratio and coverage for individual programs. Inozemtseva’s study used machine learning techniques to come up with a regression relation, and found that effectiveness is dependent on the number of methods in a test suite, with a correlation coefficient in the range 0.81 \( \leq \tau \leq 0.93 \). The study also found a moderate to high correlation, in the range 0.61 \( \leq \tau \leq 0.81 \), between effectiveness and block coverage when test suite size was ignored, which reduced when test suite size was accounted for. Kakarla found that statement coverage was correlated to mutation coverage in the range of 0.73 \( \leq \tau \leq 0.99 \) and 0.57 \( \leq \tau \leq 0.94 \). Gopinath et al. [21] found that statement coverage, out of statement, branch, and path coverage, best correlated with mutation score, and hence may best predict defect density, in a study that compared suites and mutation scores across projects, rather than using multiple suites for the same project.
The study by Tengeri et al. [42] provided a simple (essentially non-statistical) assessment of how statement coverage, mutation score, and reducibility predicted project defect densities for four open source projects, using a limited set of mutation operators.
None of these studies, to our knowledge, adopted the method used in this paper, where rather than investigate faults and their detection, we look at whether being “well tested” has predictive power with respect to future defects.\(^6\) Most also consider a smaller, less representative (at least of open source projects) set of programs, and the majority are based on programs chosen opportunistically, rather than by our more principled sampling approach. The programs used are often small but well-studied benchmarks such as the Siemens/SIR [41] suite, partly for purposes of comparison to earlier papers, and partly due to the lack of easily available realistic projects with test suites and defects, before the advent of very large open source repositories. Unfortunately, as noted by Arcuri and Briand, not at least attempting to randomize selection of programs to study can greatly reduce the generalizability of results [6].
\(^6\)It is remotely possible that Tengeri et al. [42] are using a similar method, but this is not clear from their description, and the reasoning behind our approach is not elaborated in their work.
3. METHODOLOGY
Our goal was to evaluate various approaches to assessing the testedness of a program element, using future bug-fixes.
3.1 Collecting the Subjects
For our empirical evaluation, we tried to ensure that the programs chosen offered a reasonably unbiased representation of modern software. We also tried to reduce the number of variables that can contribute to random noise during evaluation. With these goals in mind, we chose a sample of Java projects from Github [18] and the Apache Software Foundation [4]. All projects selected used the popular maven [5] build system. We randomly selected 1,800 projects. From these, we eliminated aggregate projects that were difficult to analyze, leaving 1,321 projects, of which only 796 had test suites. Out of these, 326 remained after eliminating projects that did not compile (for reasons such as unavailable dependencies, or compilation errors due to syntax or bad configurations). Next, the projects that did not pass their own test suites were eliminated as mutation analysis requires a passing test suite. Finally, we eliminated projects our AST walker could not handle. This resulted in 49 projects selected. The distribution of project size vs. test suite size, and the corresponding mutation score is given in Figure 1.
3.2 Mutant Generation
In the next phase of our analysis, we used PIT [10] for mutation analysis. PIT has been used in multiple previous studies [12, 21, 25, 40]. We extended PIT to provide the full matrix of test failures over mutants and tests. Based on their runtime behavior, mutants fall into three groups: not covered, killed, and live. We used this basic categorization in our analysis.
3.2.1 Computing Complexity
In order to evaluate the effect of complexity on testedness, it is necessary to find a reasonable measure for complexity. While previous research has used cyclomatic complexity [31] as a measure of complexity, that measure cannot be applied to single assignment statements. Further, cyclomatic complexity has been found to be strongly correlated with the size of code [39], providing little extra information. We argue that a better measure of complexity is the average number of mutants generated from each statement. When a piece of code is highly complex, we expect to see a larger number of mutants compared to simpler code.
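As a concrete (toy) illustration of this measure, here is a small Python sketch; the statements and their mutant counts are invented for illustration, not actual PIT output:

```python
def complexity(mutants_by_stmt):
    """The paper's complexity proxy: average number of generated
    mutants per statement in a program element."""
    counts = list(mutants_by_stmt.values())
    return sum(counts) / len(counts)

# Hypothetical mutant counts for three statements of one method:
element = {
    "log.info(msg);": 1,                # simple logging: few mutants
    "if (a > b && c != 0) { ... }": 5,  # conditional logic: many mutants
    "x = y * z + 1;": 4,                # arithmetic: several mutants
}
print(complexity(element))  # 10 mutants over 3 statements ~= 3.33
```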
3.3 Tracking Program Elements
We started our investigation from an arbitrarily determined recent, but not too recent, point in time deemed the “epoch” — December 1, 2014. This was done to provide a point from which testedness (mutation score and statement coverage) could be calculated, and with respect to which bug-fixes could be considered to be “in the future”. For the source code and test suite at epoch, we computed mutation score and statement coverage for each statement, block, method, and class in each project.
In order to determine when a program element (statement, block, method, or class) was changed, and track its history, we used the GumTree Differencing Algorithm [14]. For each element of interest, we considered it changed if the corresponding AST node was changed, or had any children that were added, deleted or modified. The algorithm maps the correspondence between nodes in two different trees, which allowed us to accurately track the history of the program elements.
Using AST differencing gives us three advantages over simple line based differencing. The first is that the algorithm ignores any whitespace changes. Second, we are able to track a node even if its position in the file changes (e.g. because lines have been added or deleted before our node of interest). Third, we are able to track nodes across refactorings, as long as the node stays in the same file. For example, we can track a node that has been moved because of an extract method refactoring.
When considering which statements to track, we used the version of the source code at epoch to determine which AST node resided at that particular line. We filtered only the commits that touch the file of interest. We then tracked that AST node forward in time, taking note of the commits that changed that particular node. For Java, it is possible for multiple statements to be in the same line (for example, a local variable declaration statement inside an if statement). In this case, we considered the innermost statement, as this gives the most precise results.
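GumTree itself is a Java tool implementing a sophisticated tree-matching algorithm; the following Python sketch (using the standard `ast` module, and much cruder than GumTree) only illustrates why AST-level differencing is insensitive to whitespace and layout changes while still flagging real edits:

```python
import ast

def stmt_signatures(src):
    """Map (function name, statement index) to a normalized AST dump.
    ast.dump omits line/column info by default, so whitespace and
    position changes do not affect the signature."""
    sigs = {}
    for node in ast.walk(ast.parse(src)):
        if isinstance(node, ast.FunctionDef):
            for i, stmt in enumerate(node.body):
                sigs[(node.name, i)] = ast.dump(stmt)
    return sigs

def changed_statements(old_src, new_src):
    """Statements whose structure changed (or disappeared) between versions."""
    old, new = stmt_signatures(old_src), stmt_signatures(new_src)
    return sorted(k for k in old if new.get(k) != old[k])

v1 = "def f(x):\n    y = x + 1\n    return y\n"
v2 = "def f(x):\n\n    y = x + 1\n    return y * 2\n"  # blank line + real edit
print(changed_statements(v1, v2))  # only the return statement is flagged
```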
The epoch is (and can be) arbitrary. Our basic assumption is that test coverage increases monotonically (people do not remove tests very often, and tests don’t lose coverage). We checked this assumption for 5 random projects, at 1-5 random points (depending on history length) before epoch, and confirmed: once covered, always covered, in every case.
3.4 Classifying Commits
In order to answer our research questions, we needed to categorize the code commits. For each program element, we computed the number of commits that touched that element starting from the epoch. For our purpose, code commits can be broadly grouped into one of two categories: (1) bug-fixes and improvements (modifying existing code), and (2) Other — commits that introduced new features or functionality (adding new code) or commits that were related to documentation, test code, or other concerns. A key problem is that it is not always trivial to determine which category a given commit belongs to.
3.4.1 Manual Classification of Fix-inducing Changes
In order to build a classifier for bug-fixing commits, we randomly sampled commits and manually labeled fix-inducing commits. Some keywords indicating bug-fixes were Fix, Bug, and Resolves, along with their derivatives. We should mention that not all bug-fixing commit messages include the words bug or fix; indeed, commit messages are written by the initial contributor of a patch, and there are few guidelines as to their contents. A similar observation was made by Bird et al. [7], who performed an empirical study showing that bias could be introduced due to missing linkages between commits and bugs. Improvements were manually identified based on the following keywords: Cleanup, Optimize, and Simplify or their derivatives. Commits were placed into the Other category if they had the keywords Add or Introduce. The number of lines modified was also compared with the lines added. Those commits with more lines added than modified were considered more likely to be associated with new features and were placed in the Other category. Anything that did not fit into this pattern was also marked as Other. We manually classified a set of 1,500 commits.
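A rough sketch of the keyword heuristics described above (the function name and exact stem matching are illustrative; this is not our trained classifier):

```python
# Keyword stems taken from the manual labeling rules; matching on
# lowercase stems covers derivatives like "Fixed" or "Resolving".
FIX_STEMS = ("fix", "bug", "resolv")
IMPROVE_STEMS = ("cleanup", "optimiz", "simplif")
NEW_CODE_STEMS = ("add", "introduc")

def label_commit(message, lines_added, lines_modified):
    msg = message.lower()
    if any(s in msg for s in FIX_STEMS):
        return "bug-fix"
    if any(s in msg for s in IMPROVE_STEMS):
        return "improvement"
    if any(s in msg for s in NEW_CODE_STEMS) or lines_added > lines_modified:
        return "other"  # new features, docs, tests, etc.
    return "other"      # anything that fits no pattern

print(label_commit("Fix NPE in TokenStream", 2, 6))   # bug-fix
print(label_commit("Add CSV export option", 120, 3))  # other
```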
3.4.2 Training the Commit Classifier
We used the set of manually classified commits as the training data for the machine learning classifiers.
4. ANALYSIS
We analyze the impact of testedness on program element bug-fixes using both mutation score and statement coverage in increasing lexical scope for each statement (except for statement coverage), block, method, and class.
4.1 Correlation Results
We proceed in increasing scope: statement, smallest enclosing block, method, and then class. In each scope, we evaluate how the degree of adequacy in both mutation score and statement coverage affects the total number of bug-fix commits.
4.1.1 Mutation Score ($\mu$)
The correlation between number of bug-fixes per statement and mutation score is given in Table 2.
For statements and methods, there is a statistically significant small negative linear correlation between number of bug-fixes per statement and mutation score. A similar effect is observed with Kendall $\tau_b$ correlation, where a small but statistically significant negative correlation is observed for statements, blocks, methods, and classes.
The plot of mutation score vs. normalized bug-fixes for statements is given in Figure 2, for blocks in Figure 3, for methods in Figure 4, and for classes in Figure 5.
4.1.2 Statement Coverage ($\lambda$)
The correlation between number of bug-fixes per statement and statement coverage is given in Table 3.
For statements, blocks, and methods, there was a small but statistically significant negative linear correlation between coverage and bug-fixing commits. Surprisingly, the classes had a weak but significant positive correlation. A similar small but statistically significant negative correlation is also observed using Kendall $\tau_b$, where even classes showed a small negative (but statistically insignificant) correlation. These correlations (for mutant score and for statement coverage) are much lower than those seen in recent studies showing good correlation between coverage metrics and mutation scores [19, 21] (these studies are measuring a different property, but in some sense aiming for similarly strong correlations). These correlations are not so small as to be completely devoid of value, but they do make the use of these measures dubious when comparing program elements or test suites with only small testedness differences. Unfortunately, this is a common practice in the evaluation of software testing experiments. Worse still, these results might be thought to suggest that testedness cannot be effectively measured, leaving the practicing tester without useful guidance.
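For reference, the Kendall $\tau_b$ statistic reported in our tables can be computed directly; below is a minimal O(n²) Python sketch with invented score/bug-fix data (a real analysis would use a statistics package):

```python
from itertools import combinations
from math import sqrt

def kendall_tau_b(xs, ys):
    """O(n^2) Kendall tau-b: concordant minus discordant pairs,
    with the tie correction in the denominator."""
    pairs = list(combinations(zip(xs, ys), 2))
    n0 = len(pairs)
    conc = disc = tied_x = tied_y = 0
    for (x1, y1), (x2, y2) in pairs:
        dx, dy = x1 - x2, y1 - y2
        if dx == 0:
            tied_x += 1
        if dy == 0:
            tied_y += 1
        if dx != 0 and dy != 0:
            if dx * dy > 0:
                conc += 1
            else:
                disc += 1
    denom = sqrt((n0 - tied_x) * (n0 - tied_y))
    return (conc - disc) / denom if denom else 0.0

# toy data: better-tested statements see (slightly) fewer bug-fixes
scores = [0.0, 0.2, 0.5, 0.8, 1.0]
fixes = [3, 2, 2, 1, 0]
print(kendall_tau_b(scores, fixes))  # negative, as in Tables 2 and 3
```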
4.2 Binary Testedness: Is It Covered?
However, using testedness as a continuous valuation, where we expect slightly more tested program elements to have fewer bug-fixes, is not the only way to make use of testedness. Instead of trying to separate very similarly tested elements, we could simply draw a line between tested and not-tested program elements. For example, common sense suggests that if testing is useful at all, then code that is not covered should probably have more bug-fixes than code that has at least some test covering it. This rationale is the intuition behind ideas like “getting to 80% code coverage,” though it does not justify any particular target value.
Code that isn’t executed in tests is surely less tested than code that executes in even very poor tests (since even very badly designed tests with a weak oracle may catch crashes, uncaught exceptions, and infinite loops, for example).
We compared the mean number of bug-fixes for covered vs. uncovered program elements using a t-test. The results are shown in Table 4. By covered element we mean a program element which has at least a single statement exercised by some test case. While this is a reasonable binary distinction up to the method level, a class with only a single statement covered may not be much more tested than a class that does not have any statements covered. This may account for the difference seen for classes in Table 4. We also note that there is insufficient data for statistical significance in classes (most classes are covered by at least some test).
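The comparison of means can be sketched with a hand-rolled Welch's t statistic (the unequal-variance two-sample t-test); the normalized bug-fix counts below are invented for illustration:

```python
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for two independent samples
    with possibly unequal variances."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / sqrt(va / len(a) + vb / len(b))

# hypothetical normalized bug-fix counts per element
covered = [0.5, 0.7, 0.6, 0.9, 0.7]
uncovered = [1.1, 1.3, 1.2, 1.0, 1.4]
print(welch_t(covered, uncovered))  # negative: covered code sees fewer fixes
```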
Table 2: Correlation between total number of bug-fixes per statement and mutation score

|            | (a) $R^2$ Mean | Low   | High  | p    | (b) Kendall $\tau_b$ Mean | p    |
|------------|-------|-------|-------|------|-------|------|
| Statements | -0.12 | -0.13 | -0.11 | 0.00 | -0.13 | 0.00 |
| Blocks     | -0.14 | -0.15 | -0.12 | 0.00 | -0.19 | 0.00 |
| Methods    | -0.16 | -0.18 | -0.14 | 0.00 | -0.14 | 0.00 |
| Classes    | -0.13 | -0.18 | -0.08 | 0.00 | -0.10 | 0.00 |
Table 3: Correlation between total number of bug-fixes per statement and statement coverage

|            | (a) $R^2$ Mean | Low   | High  | p    | (b) Kendall $\tau_b$ Mean | p    |
|------------|-------|-------|-------|------|-------|------|
| Statements | -0.11 | -0.12 | -0.10 | 0.00 | -0.13 | 0.00 |
| Blocks     | -0.13 | -0.14 | -0.12 | 0.00 | -0.21 | 0.00 |
| Methods    | -0.14 | -0.16 | -0.12 | 0.00 | -0.13 | 0.00 |
| Classes    | 0.09  | 0.04  | 0.13  | 0.00 | -0.04 | 0.07 |
Table 4: Difference in bug-fixes between covered and uncovered program elements

|           | Covered | Uncovered | p    |
|-----------|---------|-----------|------|
| Statement | 0.68    | 1.20      | 0.00 |
| Block     | 0.42    | 0.83      | 0.00 |
| Method    | 0.40    | 0.87      | 0.00 |
| Class     | 0.45    | 0.32      | 0.10 |
4.3 Binary Testedness: Mutation Score and Coverage Thresholds
While measuring testedness based on mutation score or statement coverage as a continuous value is of limited use, we can do much better than just drawing a dividing line between covered and not-covered program elements.
We can instead evaluate whether the mean number of bug-fixes differs significantly when the tests reach a given adequacy threshold. Table 5 and Table 6 tabulate the mean number of normalized bug-fix commits per statement for both above and below the thresholds $\mu = \{0.25, 0.5, 0.75, 1.0\}$ and $\lambda = \{0.25, 0.5, 0.75, 1.0\}$. We find that there is a statistically and practically significant difference between the mean number of bug-fixes for both measures at all thresholds selected (though with classes perfect statement coverage strangely becomes a predictor of more faults). Note that for individual statements, all thresholds based on statement coverage are equivalent (coverage is always 0 or 1).
Table 7 shows mutant threshold results if we first remove all program entities that are not covered. This has little impact on the ability of thresholds to predict bug-fixes.
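A threshold split of this kind is straightforward to compute; in this sketch the element records and field names are invented:

```python
def split_by_threshold(elements, key, threshold):
    """Mean normalized bug-fixes for elements at/above vs. below a
    testedness threshold (field names are illustrative)."""
    above = [e["fixes"] for e in elements if e[key] >= threshold]
    below = [e["fixes"] for e in elements if e[key] < threshold]
    mean = lambda xs: sum(xs) / len(xs)
    return mean(above), mean(below)

# toy elements with mutation score ("mu") and normalized bug-fix counts
elements = [
    {"mu": 0.9, "fixes": 0.4}, {"mu": 0.8, "fixes": 0.5},
    {"mu": 0.3, "fixes": 1.1}, {"mu": 0.1, "fixes": 0.9},
]
print(split_by_threshold(elements, "mu", 0.75))  # means ~0.45 vs. ~1.0
```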
4.4 Complexity and Change
We compare the number of mutants, normalized by the size of the program element (e.g. number of lines), to the number of post-epoch bug-fixes for that element.
- **Statements**: Comparing the number of bug-fixes to the number of mutants per statement, we find that the 95% confidence interval is $(-0.004697, 0.013204)$, $p > 0.01$.
- **Methods**: Comparing the number of bug-fixes to the number of mutants per method, we find that the 95% confidence interval is $(-0.087117, -0.048715)$, $p < 0.01$.
- **Classes**: Comparing the number of bug-fixes to the number of mutants per class, we find that the 95% confidence interval is $(-0.096285, -0.00863)$, $p > 0.01$.
Summary: Most results are not statistically significant. Further, there is a weak correlation between the number of mutants (normalized) and the number of bug-fixes. More “complex” code as measured by number of mutations has slightly fewer bug-fixes, but the correlation is even weaker than between testedness measures and bug-fixes. However, the difference in correlation is not very large, so another way to interpret this is that as a continuous measure, simple number of mutants, normalized, is only slightly worse as a predictor of bug-fixes than “testedness.” However, unlike testedness measures, the number of mutants does not provide a useful binary predictor for bug-fixes. Binary splits based on a threshold using the mean normalized mutants (2.79) do not produce significantly different populations. Setting a threshold of 5 or more normalized mutants does produce significant differences ($p < 0.0001$), but the means are very similar, e.g. 1.1 bug-fixes for less complex statements vs. 0.95 bug-fixes for statements with more mutants.
4.5 Complexity and Testedness
Comparing the normalized number of mutants to the mutation score per program element:
- **Statements**: The 95% confidence interval is $[0.008016, 0.025012]$, $p < 0.01$.
- **Methods**: The 95% confidence interval is $[0.005755, 0.044311]$, $p > 0.01$.
- **Classes**: The 95% confidence interval is $[-0.049426, 0.046223]$, $p > 0.01$.
Summary: At the statement level (only) there is a statistically significant but very weak correlation between the number of mutants (normalized) and the mutation score. More complex statements are (very slightly) more well-tested.
5. DISCUSSION
Our empirical results have some potentially important consequences for testing research and practice.
5.1 The Danger of Relying on Small Testedness Differences
First, there is only a weak correlation between either statement coverage or mutation score and future bug-fixes. This indirectly suggests that research efforts using coverage or mutants to evaluate test suite selection, generation, or reduction algorithms may draw unwarranted conclusions from small, significant differences in these measures. In particular, it may suggest that using mutation to evaluate testing experiments can potentially fail to reflect the ability of systems to detect the types of faults that are detected by practitioners and worth correcting in real-life. Given that the literature supporting the value of code coverage as a predictor of fault detection mostly relies on the ability of mutation testing to reflect real fault detection, and that mutation testing’s effectiveness is validated by only a small number of studies, none of which present overwhelming evidence over a large number of programs, we strongly suggest that testing experiments, whenever possible, should rely on the use of some real faults in addition to coverage or mutation-score based evaluations. In some contexts, where detecting all possible faults is the goal (e.g., safety critical systems) and the oracle for correctness is known to be extremely good, mutation-based analyses may be justified, but even in those cases data based on real faults would be ideal.
5.2 Practical Application of Thresholds
On the other hand, our results show that numerous simple percentage thresholds for statement coverage and mutation score can, in a statistically significant way, predict the number of bug-fixes (with mean differences between populations of about 2x). This suggests a simple method for prioritizing testing targets in a program. The entities with the highest bug-fix counts were, unsurprisingly, those not covered by any tests at all. As a first priority, covering uncovered program elements is likely to be the most rewarding way to improve testedness, since these elements can be expected to have the most potential undetected bugs that will be revealed in the near future. Surviving mutants of entities with low mutation scores can then be used to guide further testing. One obvious question is which threshold should be used, given that many thresholds seem effective. Our data shows that it really does not matter much: the significance and even the average bug-fixes are not radically different across thresholds. The simplest answer is to start with low thresholds, keep improving testing until there are no remaining interesting elements below the current threshold, then move on to a higher threshold. Setting a particular threshold for project-level testing is not supported by our data, however, as there is no clearly “best” dividing line, only a number of ways to define “less tested” and “more tested” elements, most of which equate to more bug-fixes for less tested elements.
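The threshold procedure described above amounts to partitioning program elements at a cutoff and comparing mean future bug-fixes between the two groups. A minimal illustration, where the per-element scores and bug-fix counts are made-up assumptions, not values from the study:

```python
def split_by_threshold(scores, bugfixes, t):
    """Mean bug-fix counts for 'more tested' (score >= t) versus
    'less tested' (score < t) program elements."""
    more = [b for s, b in zip(scores, bugfixes) if s >= t]
    less = [b for s, b in zip(scores, bugfixes) if s < t]
    mean = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return mean(more), mean(less)

# Hypothetical mutation scores and future bug-fix counts per element.
scores   = [0.0, 0.1, 0.2, 0.4, 0.6, 0.8, 0.9, 1.0]
bugfixes = [3,   2,   2,   1,   1,   0,   1,   0]

for t in (0.25, 0.5, 0.75, 1.0):
    more, less = split_by_threshold(scores, bugfixes, t)
    print(f"t={t}: more-tested mean={more:.2f}, less-tested mean={less:.2f}")
```

In practice one would also test the two populations for a significant difference (e.g., with a Mann-Whitney U test) rather than comparing means alone.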
Table 5: Mutation score thresholds
<table>
<thead>
<tr>
<th></th>
<th>(a) 0.25</th>
<th>(b) 0.5</th>
<th>(c) 0.75</th>
<th>(d) 1.0</th>
</tr>
</thead>
<tbody>
<tr>
<td>Statements</td>
<td>$\mu \geq 0.25$</td>
<td>$\mu < 0.25$</td>
<td>$\mu \geq 0.5$</td>
<td>$\mu < 0.5$</td>
</tr>
<tr>
<td>Blocks</td>
<td>0.60</td>
<td>1.20</td>
<td>0.00</td>
<td>0.60</td>
</tr>
<tr>
<td>Methods</td>
<td>0.39</td>
<td>0.81</td>
<td>0.00</td>
<td>0.39</td>
</tr>
<tr>
<td>Classes</td>
<td>0.32</td>
<td>0.87</td>
<td>0.00</td>
<td>0.33</td>
</tr>
</tbody>
</table>
Table 6: Statement coverage score thresholds
<table>
<thead>
<tr>
<th></th>
<th>(a) 0.25</th>
<th>(b) 0.5</th>
<th>(c) 0.75</th>
<th>(d) 1.0</th>
</tr>
</thead>
<tbody>
<tr>
<td>Statements</td>
<td>$\lambda \geq 0.25$</td>
<td>$\lambda < 0.25$</td>
<td>$\lambda \geq 0.5$</td>
<td>$\lambda < 0.5$</td>
</tr>
<tr>
<td>Blocks</td>
<td>0.68</td>
<td>1.20</td>
<td>0.00</td>
<td>0.68</td>
</tr>
<tr>
<td>Methods</td>
<td>0.42</td>
<td>0.83</td>
<td>0.00</td>
<td>0.42</td>
</tr>
<tr>
<td>Classes</td>
<td>0.40</td>
<td>0.87</td>
<td>0.00</td>
<td>0.41</td>
</tr>
</tbody>
</table>
5.3 Complexity, Bug-Fixes, and Testedness
There does not seem to be any very strong or interesting relationship between complexity (as measured by number of mutants) and bug-fixes, or between complexity and testedness. More complex code is (very slightly) less fixed, perhaps because it is (very slightly) more tested. The main take-away from the complexity analysis is that the number of mutants is almost as good a predictor of lack of bug-fixes as testedness, if used as a simple correlation, but it does not support useful binary distinctions in likely bug-fixes.
5.4 Testing Is Likely Effective
One final point to note is that our data provides fairly strong support for the idea that testing is effective in forcing quality improvements in code. Our measures of testedness are, essentially, based purely on the dynamic properties of a test suite, not on static properties of program elements (the number of mutants for an entity depends on static properties, but all statements with any mutants can achieve or fail to achieve a score of any particular threshold). This means that, without using the static properties of code, the degree to which code is exercised in a test suite can often be used to predict which of two entities will turn out to require more bug-fixes. As far as we can determine, there are only a few potential causes for this ability to use the dynamics of a test suite to predict bug-fixes:
1) Some unknown property not related to code quality results in both a tendency to write tests that cover code and in fewer bug-fixes for that code. 2) A known property results in both a tendency to write tests that cover code and in fewer bug-fixes for that code: namely, good developers write tests for their already more correct code. Testing itself is more a sign of good code than a cause of good code. 3) Tests covering code often detect bugs, and developers fix the bugs, so the code has fewer bugs to fix.
The first possibility is, in our opinion, unlikely — it is difficult to imagine such an unknown factor. Some obvious candidate factors do not really bear up on examination. For example, perhaps code with many bug-fixes is new code, and so has not yet had tests written for it. If the act of writing tests for the new code makes it less buggy, however, then testing is in fact effective. Moreover, the predictive power of mutation score being over a threshold is present even if we restrict our domain to entities that have at least one covering test. New code might be expected to be completely untested, removing most truly new (no tests) code from this population. The second possibility is more plausible, and may well be true to some extent. The third possibility seems most plausible, and we believe is likely to be the main cause of the observed effects. However, even if we assume that the second explanation is the primary cause for the relationships we observed, notice the peculiar consequences of this claim: developers who believe testing is worthwhile, and devote more time to it, are “wrong” in that testing itself is useless, but on the whole produce statistically better code than those who do not value testing. This may not be an appealing argument to those dubious about testing’s value.
While it could be argued that other measures of testedness such as warnings generated by static analysis tools could be an even better indicator, we believe that the number of bugs fixed is the most direct measure of testedness available.
6. THREATS TO VALIDITY
**Threats Due to Sampling Bias:** To ensure representativeness of our samples, we opted to use search results from the Github repository of Java projects that use the Maven build system. We picked all projects that we could retrieve given the Github API, and selected from these only based on necessary constraints (e.g., the project must build, and tests at epoch must pass). However, our sample of programs could be biased by skew in the projects returned by Github. Github’s selection mechanisms favoring projects based on some unknown criteria may be another source of error. We also handpicked some projects from Apache, such as commons-lang. As our samples only come from Github and Apache, this may be a source of bias, and our findings may be limited to open source programs. However, we believe that the large number of projects more than adequately addresses this concern.
**Bias Due to Tool Used:** For our study, we relied on PIT. We have done our best to extend PIT to provide a reasonably sufficient set of mutation operators, ensuring also that the mutation operators were non-redundant (and have checked for redundancy in past work using PIT).
Secondly, we used the Gumtree algorithm discussed earlier for tracking program elements across commits. However, the algorithm used is unable to track program elements across renames or movement to another folder. Further, refactoring that involves modification of scope, such as moving the code out of the current compilation unit, also causes the algorithm to lose track of the program element after refactoring.
Further, in this study we did not apply a systematic method for the detection and removal of equivalent mutants. This might have an impact on the mutation score of some projects.
**Bias Due to Commit Classification:** Our determination of commits as bug-fixes or not, and of commits that “end the history” of a program element, both depend on a learned classifier. While our results do not require the classifier to be anywhere near perfect, it may be that some unknown bias in its failures unduly influences our results, or gives rise to the weakness of the observed correlations.
**Bias Due to Lack of High Coverage:** Some researchers have found that a strong relationship between coverage and effectiveness does not show up until very high coverage levels are achieved [15, 17, 24]. Since the coverage for most projects rarely reached very high values, it is possible that we missed the existence of such a strong relationship.
**Bias Due to Confounding Factors:** Numerous confounding factors exist. For example, we assume that there is no specific skew in the individuals responsible for the bug fixes, and that other personality factors in projects do not come into play. However, this cannot be guaranteed. Next, bug fixes may be done under various circumstances; for example, the quality of a bug fix made under time pressure may be very different from the quality of one made at leisure. Finally, we do not consider changes to the test cases themselves. However, we believe that the impact of these factors is limited due to the large number of subjects considered.
7. CONCLUSION
This paper uses a novel method to evaluate the effectiveness of test suite quality measurements, which, we suggest, essentially aim to capture the “testedness” of a program or program elements. Much of previous research attempting to evaluate such measures operates by a procedure that, at a suitably high level of abstraction, can be described as first collecting a large set of tuples of the form (testedness measure for suite, # faults found by suite), then applying some kind of statistical analysis. Details vary, in that suites may all be for one SUT, or for multiple SUTs (though seldom for more than 5-10 SUTs), and in most cases “actual faults” are either hand-seeded or “faults” produced by mutation testing (which is assumed to measure real fault detection on a largely recently established and still limited empirical basis [27]). These studies have produced a variety of results, sometimes almost contradictory [22]. Is coverage useful? Is mutation score (more) useful?
We propose a different approach. Measuring fault detection for a suite can be extremely labor-intensive; worse, depending on the definition of faults, we may give too much credit for detecting faults that are of little interest to most developers. Instead, our evaluation chooses a point in time, collects testedness measures for a passing test suite from that date, and then examines whether these measures predict actual future bug-fixes for program elements. If “well tested” elements of a program require no less effort to correct, then either we are not measuring testedness effectively, or testing itself is ineffective.
We assume that testing is effective. Under this assumption, we show that there is the expected negative correlation between testedness and number of future bug-fixes. However, this correlation is so weak that it makes using it to compare testedness values in a continuous fashion, where slightly more tested code is assumed to be slightly better, or slightly higher scoring test suites are assumed to be better than slightly lower scoring test suites, a dubious enterprise. This suggests that the evaluation method in many software testing publications may be of questionable value. On the other hand, when we use testedness measures to split program elements into simple “more tested” and “less tested” groups, the population differences are typically significant and the mean bug-fixes are sufficiently different (usually about a factor of 2x) to provide practical guidance in testing.
So, is (statement) coverage useful? Is mutation score (more) useful? The answer to both, we believe, may be that it depends on what you expect to achieve using these methods. Testing is an inherently noisy and idiosyncratic process, and whether a suite detects a fault depends on a large number of complex variables. It would, given this complexity of process, be very surprising if any simple dynamic measure computable without human effort for any test suite produced strong correlations like those often shown between code coverage and mutation score (0.6-0.9). The correlations between these measures are often high because both result from regular, even-handed, automated analysis of the dynamics of a test suite. Actual faults are apparently (unsurprisingly) produced and detected by a much more complex and irregular process. However, when used to draw the line between less tested and more tested program elements, testedness measures can provide a simple automated way to prioritize testing effort, and recognize when all the elements of an SUT have passed beyond a high threshold of testedness, and are thus likely to have fewer future faults. In short, while we cannot (at present) measure testedness as precisely as we (software engineering researchers) would like, we can measure testedness in such a way as to provide some practical assistance to the humble working tester.
**Acknowledgements:** A portion of this research was funded by NSF CCF-1054876.
<table>
<thead>
<tr>
<th></th>
<th>(a) 0.25</th>
<th>(b) 0.5</th>
<th>(c) 0.75</th>
<th>(d) 1.0</th>
</tr>
<tr>
<th></th>
<th>$\mu \geq 0.25$</th>
<th>$\mu < 0.25$</th>
<th>$\mu \geq 0.5$</th>
<th>$\mu < 0.5$</th>
</tr>
</thead>
<tbody>
<tr>
<td>Statements</td>
<td>0.60</td>
<td>1.16</td>
<td>0.00</td>
<td></td>
</tr>
<tr>
<td>Blocks</td>
<td>0.39</td>
<td>0.72</td>
<td>0.00</td>
<td></td>
</tr>
<tr>
<td>Methods</td>
<td>0.32</td>
<td>0.90</td>
<td>0.00</td>
<td></td>
</tr>
<tr>
<td>Classes</td>
<td>0.11</td>
<td>1.66</td>
<td>0.00</td>
<td></td>
</tr>
</tbody>
</table>
8. REFERENCES
A computability perspective on self-modifying programs
Guillaume Bonfante, Jean-Yves Marion, and Daniel Reynaud-Plantey
Nancy University – LORIA,
615, rue du Jardin Botanique, BP-101, 54602 Villers-lès-Nancy, France
Abstract—In order to increase their stealth, malware commonly use the self-modification property of programs. By doing so, programs can hide their real code so that it is difficult to define a signature for it. But then, what is the meaning of those programs: the obfuscated form, or the hidden one? Furthermore, from a computability perspective, it becomes hard to speak about the program, since its own code varies over time. To cope with these issues, we provide an operational semantics for self-modifying programs and we show that they can be constructively rewritten to a non-modifying program.
Keywords-Self-modifying code, semantics, computability, virus, obfuscation
I. INTRODUCTION
Self-modifying programs are programs which are able to modify their own code at runtime. Nowadays, self-modifying programs are commonly used. For example, a packer transforms any program into a program with equivalent behavior, but which decompresses and/or decrypts some instructions. Thus, packers transform programs into self-modifying programs. Another example of self-modifying programs are just-in-time compilers.
Self-modifying techniques allow obfuscation of codes, thus protecting the intellectual property of the program authors. Besides of these positive applications, malware heavily use self-modification to armour themselves and to avoid detection, and so throw the self-modification paradigm in the dark side of programming.
There are lots of reasons to study self-modifying programs from both a theoretical and a practical point of view. One reason is to be able to have a good understanding of what can be done with self-modifying programs. Another reason is to provide tools to analyse them in the context of malware. We may foresee difficulties of such an analysis by reading for example the introduction of [1]. As opposed to traditional programs, we do not have static access to the instructions of a self-modifying program. That is why we shall introduce pseudo-programs, that is, programs for which we just have a fragment of the listing (corresponding to the current step of a computation). Indeed, a self-modifying program may write and run some new code, and this cannot be predicted a priori without execution. So we just have a partial view of the code. In short, runtime analysis is very hard even for trained professional reverse engineers but currently remains the only practical approach. On the other hand, we are not aware of any effective static analysis for self-modifying programs. This situation certainly comes from the lack of studies on self-modifying constructions. To our knowledge, there are only a few scientific papers on this topic, and without being exhaustive, we may mention: [2] which proposes an axiomatic semantics, and [3] which tries to provide a semantics.
More recently in [4], we developed a dynamic type system and a tool, TraceSurfer, in order to analyse self-modifying binary programs, to recognize packer signatures and to establish some non-interference like properties on binary code. TraceSurfer outputs a view of the relations between layers of dynamic code (monitoring, generation, secure erasing). We have observed that the strategies of the virus writers are sophisticated. This is one of our motivation for a deep analysis of self-modifying programs.
This study is an attempt to contribute to the understanding of self-modifying programs. For this, we provide some semantics. Next to the traditional approaches, operational, axiomatic and denotational semantics, we claim that deobfuscation also plays the role of a semantics. Obfuscation usually hides the real code of a program by transforming it according to some rules. In some way, the real code still exists, but in a hidden form. Deobfuscation then consists in rediscovering the initial code within the fog.
Our contribution is to show that classical computability results may give a better understanding of self-modifying programs and deobfuscation. This study follows the spirit of the works of Jones [5], [6]. Our main result is a constructive interpretation of Rogers’s isomorphism theorem. The original result says that given two (acceptable) programming languages, there is an effective isomorphism between both languages. In our context, we use Rogers’s construction to define a computability semantics of a self-modifying program.
II. AN ABSTRACT ASSEMBLY LANGUAGE
One point here is about the design of the assembly language. In a “real” machine language, addresses and values are encoded, say, by 32-bit words and so they are finite. Let us quote Jones on this point: “We here have a paradoxical situation: that the most natural model of daily computing on computers, which we know to be finite, is by an infinite (i.e., potentially unbounded) computation model”. For this reason, we use an infinite model; however, we have tried to keep things as finite as possible. We use finitely many registers, finitely many instructions (of fixed size), and memory cells contain only one letter. But, in order to deal with unbounded addresses, since they are stored in registers, we allow the content of these registers to be unbounded.
The rationale of our model of a machine is to put the focus on one (unilateral) infinite storage tape, where instructions are loaded, executed and potentially transformed. Registers serve only for the computation of intermediate values and for the storage of the current instruction address.
In that sense, our model differs from the usual random access machine (RAM) model which puts efforts on registers (in particular, register machines employ a denumerable set of registers). As a matter of fact, our model is closer to a counter machine (CM). Since we have only a finite number of registers, our model is less powerful than a RAM. On the other hand, it is closer to the functioning of current computers.
Anyway, what makes the present model different from these two standard models is that we store the program within the configuration, not in an idealized stable world. Consequently, usual simulations of (say) Turing Machines by RAM, and all classical results and notions (such as specializers, self-interpretation, padding, Kleene’s fixpoint Theorem and so on) must be reconsidered in the present context.
A. The syntax
Let \( B \) be a finite set of letters modelling bytes. \( B^* \) denotes the set of finite words over \( B \). We call elements in \( B^* \) addresses or pointers. From now on, we suppose that there is a blank character \( \square \in B \). On \( B^* \), we use the following operations.
- \(|w|\) denotes the size of words.
- The concatenation operation is written with a dot.
- Given a word \( w \), we denote by \( w_i \) its \( i \)-th letter, beginning with index 0, that is \( w = w_0.w_1 \cdots w_{|w|-1} \).
Furthermore, we suppose given an arithmetic on pointers by means of an isomorphism between \( (B\setminus\{\square\})^* \) and \( \mathbb{N} \), let us say via \( \iota : (B\setminus\{\square\})^* \rightarrow \mathbb{N} \). Then, \( \iota^{-1}(0) \) is the initial address, \( \iota^{-1}(\iota(w)+1) \) returns the “next” address, etc. To avoid tedious notations, we will no longer make a clear distinction between addresses and natural numbers, and we will write \( w + k \) where \( w \) is an address and \( k \) an integer.
The context shows what is going on. We extend \( \iota \) to words \( w \in B^* \) by setting \( \iota(\square^n.u) = \iota(u.\square^n) = \iota(u) \) with \( u \in (B\setminus\{\square\})^* \) and \( n \in \mathbb{N} \). In other words, a \( \square \) used as prefix or suffix is transparent for \( \iota \). As a matter of fact, one will have observed that such an arithmetic is largely used in low-level programming languages.
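One concrete choice for \( \iota \) is bijective base-\( k \) numeration over the non-blank alphabet. The sketch below is our own illustration (with `_` standing for \( \square \) and an assumed two-letter alphabet); it satisfies the stated requirements, including the transparency of leading and trailing blanks:

```python
ALPHABET = "ab"  # assumed B \ {blank}: any finite non-blank alphabet works
BLANK = "_"

def iota(word):
    """Bijective base-k value of a word; blanks at either end are ignored."""
    word = word.strip(BLANK)
    n = 0
    for c in word:
        n = n * len(ALPHABET) + ALPHABET.index(c) + 1
    return n

def iota_inv(n):
    """Inverse of iota: the unique non-blank word with value n."""
    out = []
    while n > 0:
        n, r = divmod(n - 1, len(ALPHABET))
        out.append(ALPHABET[r])
    return "".join(reversed(out))

def next_address(w):
    # pointer arithmetic: the "next" address after w, i.e. iota^{-1}(iota(w)+1)
    return iota_inv(iota(w) + 1)
```

Bijective numeration (rather than ordinary base-\( k \)) is what makes \( \iota \) a genuine bijection: there are no "leading zero" words mapping to the same number.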
Finally, let \( R \) be a finite (non-empty) set of registers. Without loss of generality, one of these registers is \( \texttt{ip} \), the instruction pointer. The choice of the other registers belongs to the design of the framework (the machinery).
A function \( \rho : R \rightarrow B^* \) is named a register valuation and a function \( \sigma : B^* \rightarrow B \) is called a store; \( \mathcal{S} \) denotes the set of stores. In the present setting, we do not introduce a notion of stack. This could be done without harm.
The function \( 0 : R \rightarrow B^* \) is the constant function \( r \mapsto 0 \). For stores, \( \square : B^* \rightarrow B \) is the function \( w \mapsto \square \). We introduce an update function on stores. Given \( \sigma : B^* \rightarrow B \), \( k \in \mathbb{N} \) and a word \( w \in B^* \), we write \( \sigma[k ← w] \) for the store:
\[
\sigma[k \leftarrow w] : B^* \rightarrow B, \qquad
v \mapsto
\begin{cases}
\sigma(v) & \text{if } v < k \text{ or } v \geq k + |w| \\
w_i & \text{if } v = k + i,\ 0 \leq i < |w|
\end{cases}
\]
In other words, looking at the store as a tape, it means that one writes the word \( w \) from the index \( k \). Finally, we use the notation \( \sigma(m..n) \) where \( m \leq n \in \mathbb{N} \) for the word \( \sigma(m), \sigma(m + 1), \ldots, \sigma(n) \). If \( n < m \), then \( \sigma(m..n) \) is the empty word.
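The store operations above translate directly into code. The following sketch is our own illustration (with `_` for \( \square \) and integer cell indices standing in for addresses); it implements the functional update \( \sigma[k \leftarrow w] \) and the window \( \sigma(m..n) \):

```python
from collections import defaultdict

BLANK = "_"

def blank_store():
    # the everywhere-blank store; only finitely many cells ever differ from blank
    return defaultdict(lambda: BLANK)

def update(store, k, w):
    # sigma[k <- w]: a new store with the word w written from index k onward
    new = store.copy()
    for i, c in enumerate(w):
        new[k + i] = c
    return new

def window(store, m, n):
    # sigma(m..n): the word sigma(m).sigma(m+1)...sigma(n); empty if n < m
    return "".join(store[i] for i in range(m, n + 1))
```

Returning a fresh store from `update` mirrors the functional (non-destructive) definition in the text; an interpreter could of course mutate in place instead.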
The abstract assembly language (ASL) is:
\[
\begin{align*}
&\text{LOAD } r\ r\ r \qquad \text{CPY } r\ r\ r \qquad \text{MOV } r\ r\\
&\text{JUMP } r \qquad\qquad \text{STOP}\\
&\text{LSHIFT } r\ r \qquad \text{RSHIFT } r\ r\\
&\text{LCCAT } l\ r \qquad\ \text{RCCAT } l\ r\\
&\text{OP } r\ r\ r \qquad\ \ \text{NOT } r
\end{align*}
\]
with \( r \) and \( l \) respectively register names and letters and \( \text{OP} \in \{\text{ADD, SUB, MUL, DIV, MOD, CCAT, EQ, LEQ, AND, OR}\} \).
The concrete syntax of the assembly language (CAL) is an encoding of the ASL with words in \( B^* \). To avoid being too abstract, we provide now such an encoding. However, one should keep in mind that we essentially use only one feature of this encoding: the language of instructions must be prefix, that is, there are no distinct words \( w_1, w_2 \in \text{CAL} \) such that \( w_1 = w_2.u \) with \( u \in B^* \). The reason is that instructions are encoded in memory. Therefore, at one address in the store, there should be no ambiguity on the current instruction to be executed.
Let us consider words \( \text{mov, l_shift, add, ...} \in (B\setminus\{\square\})^* \) to encode the ASL lexemes\(^2\). Registers are encoded in the same way by words \( \mathbb{B}, \mathbb{A}, \mathbb{P}, \ldots \in (B\setminus\{\square\})^* \) which are taken to be different from the latter ones.
By a clever choice of the encoding words, we can suppose that the encoding of ASL instructions is a prefix language. Moreover, we can even suppose that encoded instructions all have the same size, say \( K \).
\(^1\) Actually, since definitions are totally relative to \( \iota \), the isomorphism could not be computable. However, in order to provide a concrete implementation of the machine, we require it to be so.
\(^2\) One could use \( B^* \), but this condition ensures that non-self modifying programs can be written in \( (B\setminus\{\square\})^* \), a property used for Theorem 11.
Due to the fact that CAL is prefix, the ternary relation \( \text{instr} \) defined below is actually a (partial) function \( S \times B^* \to CAL \):
\[
(\sigma, k, w) \in \text{instr} \iff \sigma(k..k + |w| - 1) = w.
\]
Thus, we will write \( \text{instr}(\sigma, k) \) to mention the unique instruction \( w \) such that \( \text{instr}(\sigma, k, w) \) if such an instruction exists. Otherwise, we write \( \text{instr}(\sigma, k) = \bot \).
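Because every encoded instruction has the same size \( \mathcal{K} \) and CAL is prefix, \( \text{instr}(\sigma, k) \) reduces to reading \( \mathcal{K} \) cells and testing membership. A minimal sketch, assuming a store represented as a dict and four hypothetical four-letter opcode encodings of our own invention (not the paper's):

```python
K = 4  # assumed common encoding size of all instructions

# Hypothetical fixed-width opcode encodings; having equal length makes the
# instruction language trivially prefix (no code is a proper prefix of another).
CAL = {"load", "mov_", "jmp_", "stop"}

def instr(store, k):
    """instr(sigma, k): the unique instruction at address k, or None (bottom)."""
    w = "".join(store.get(i, "_") for i in range(k, k + K))
    return w if w in CAL else None
```

Note how reading exactly `K` cells makes the lookup unambiguous, which is precisely the property the prefix condition guarantees in the general (variable-width) case.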
Traditionally, a program has a fixed text. Its code is a list of instructions invariant wrt any run, on which analyses can be performed. In the context of self-modifying programs, the situation is different because we don’t have access to the whole code. The code evolves during a computation and may depend on the input. So we introduce the notion of pseudo-program.
**Definition 1.** A pseudo-program is a piece of text \( p \in B^* \) which potentially contains the code which will be executed. To distinguish pseudo-programs from arbitrary strings, we use a type writer font and we use \( P \) as an alias for \( B^* \) for the set of pseudo-programs.
Contrary to what happens in the usual case, one cannot make a clear distinction between instructions and data since some data may become instructions after being rewritten and vice versa. So, we cannot define a pseudo-program to be a string in \( CAL^* \) which would be the natural presentation for non-self-modifying programs.
III. OPERATIONAL SEMANTICS
A configuration is given by a couple \((\rho, \sigma)\) where \(\rho\) is a register valuation and \(\sigma\) a store. Configurations characterize the states of the machine.
**Definition 2** (Operational Semantics). The successor relation on configurations is defined in Figure 1. As usual, we write \((\rho, \sigma) \rightarrow^n (\rho', \sigma')\) the fact that \((\rho, \sigma) = (\rho_0, \sigma_0) \rightarrow (\rho_1, \sigma_1) \rightarrow \cdots \rightarrow (\rho_n, \sigma_n) = (\rho', \sigma')\) and \(\rightarrow^*\) is the transitive closure of \(\rightarrow\).
One may have observed that indirect addressing is done via the store. Since we have only finitely many registers, we can name them directly. However, to denote some particular window in the memory, we use two registers, one for the beginning and one for the length of the window.
For the remainder of the section, we suppose given the domain of computations \( \Sigma^* \) where \( \Sigma \subseteq B \). In particular, one will have observed that pseudo-programs (by taking \( \Sigma = B \)) can themselves be used as data for other programs.
Given a pseudo-program \( p \in P \) and \( k \) words \( w_1, \ldots, w_k \in \Sigma^* \), the initial configuration (for these words) is defined as \( c_0(p, w_1, \ldots, w_k) = (0, \square[0 \leftarrow p][|p| + 1 \leftarrow w_1.\square.w_2.\square \cdots \square.w_k]) \).
A function \( \phi : (\Sigma^*)^k \rightarrow \Sigma^* \) is computed by a pseudo-program \( p \) if for all \( w_1, \ldots, w_k \in \Sigma^* \), we have \( c_0(p, w_1, \ldots, w_k) \rightarrow (\rho_1, \sigma_1) \rightarrow \cdots \rightarrow (\rho_n, \sigma_n) \) where a) \( \text{instr}(\sigma_n, \rho_n(ip)) = \text{stop} \) and b) \( \rho_n(out) = \phi(w_1, \ldots, w_k) \) with \( out \) a given and fixed register. Conversely, a program \( p \) computes the unique function \( \phi : B^* \rightarrow B^* \) such that:
- \( \phi(x) = \rho_n(out) \) if one has \( c_0(p, x) \rightarrow (\rho_1, \sigma_1) \rightarrow \cdots \rightarrow (\rho_n, \sigma_n) \) and \( \text{instr}(\sigma_n, \rho_n(ip)) = \text{stop} \).
- \( \phi(x) \) is otherwise undefined.
This function is written \( [p] \).
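To fix intuitions, the successor relation can be rendered as a small Python sketch. The cell-per-instruction store, the register names and the handful of opcodes below are simplifications of the model of Figure 1, not its exact letter-level encoding; the step counter gives the computation length used later for time complexity.

```python
# A toy rendition of the machine of Figure 1. The store holds one
# instruction or data word per cell (a simplification of the paper's
# letter-per-address encoding); registers map names to values.

def step(rho, sigma):
    """One application of the successor relation, or None on `stop`."""
    k = rho["ip"]
    op, *args = sigma[k]
    if op == "stop":
        return None
    nxt = dict(rho, ip=k + 1)            # priority to the normal flow
    if op == "add":                      # op r1 r2 r3
        r1, r2, r3 = args
        nxt[r3] = rho[r1] + rho[r2]
    elif op == "cpy":                    # cpy r1 r2
        nxt[args[1]] = rho[args[0]]
    elif op == "mov":                    # mov r1 r2: write to the store
        sigma = dict(sigma)
        sigma[rho[args[1]]] = rho[args[0]]
    elif op == "jump":                   # jump r
        nxt["ip"] = rho[args[0]]
    elif op == "test" and not rho[args[0]]:
        nxt["ip"] = rho[args[1]]         # test r1 r2: jump on "false"
    return nxt, sigma

def run(program, rho):
    """Iterate the successor relation; final registers + step count."""
    sigma, steps = dict(enumerate(program)), 0
    while (res := step(rho, sigma)) is not None:
        rho, sigma = res
        steps += 1
    return rho, steps

prog = [("add", "a", "b", "out"), ("stop",)]
final, steps = run(prog, {"ip": 0, "a": 2, "b": 3, "out": 0})
```

Here `[prog]` applied to the register contents 2 and 3 yields 5 in `out` after a single step.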
**A. Some examples**
For the notation of programs, we use the semicolon instead of the dot to denote the concatenation of words.
**Example 1.** Let us introduce some syntactic sugar. Given a word \( w \), we define:
\[
\begin{align*}
\text{l_ccat } w \ r & \triangleq \text{l_ccat } w_{|w|-1} \ r; \\
\text{l_ccat } w_{|w|-2} \ r; \\
\vdots \\
\text{l_ccat } w_0 \ r
\end{align*}
\]
To test if a register \( r \) equals some word \( w \in B^* \) and jump otherwise to the content of the register \( p \), we use:
\[
\begin{align*}
\text{test}_w \ r \ p & \triangleq \text{l_ccat } w \ \text{tp1} ; \\
& \quad \text{eq } r \ \text{tp1} ; \\
& \quad \text{test } \text{tp1} \ p
\end{align*}
\]
where \( \text{tp1} \) is a temporary register.
The following program computes the length of its first argument (written in \( (B \setminus \{\square\})^* \)).
**Example 2.** Registers are \( r, s, ap, out, b, p \).
\[
\begin{align*}
\text{length} & \triangleq \text{l_ccat } k_1 \ r; \ \text{l_ccat } k_2 \ s; \\
& \quad \text{l_ccat } k_3 \ b; \ \text{l_ccat } 1 \ ap; \\
& \quad \text{load } r \ ap \ p; \ \text{test}_\square \ p \ s; \\
& \quad \text{add } ap \ out \ out; \ \text{add } ap \ r \ r; \\
& \quad \text{jump } b; \ \text{stop}
\end{align*}
\]
where \( k_1 \) is the address of the argument (that is, the size of the program plus one), \( k_2 \) is the address of the instruction stop and \( k_3 \) is the address of the instruction load \( r \ ap \ p \).
We present in Appendix A a technique to compute the \( k_i \)'s. This technique will be used later on and, in particular, in Proposition 8.
**B. The robustness of the model**
One first point deals with the computational cost of each step of computation. Reading the instructions can be done in constant time; indeed, we took the precaution to encode instructions with words of size equal to a constant \( \mathcal{K} \). As is the case for RAMs, the unit cost of operations on registers depends on the size of the content of these registers. We refer to Jones [6] for a full discussion of these issues. Anyway, complexity theory is outside the scope of this paper, so we take the simplest notion of time complexity: the computation length.

\[
\frac{\rho(ip) = k \quad \text{instr}(\sigma, k) = \text{load}\ r_1\ r_2\ r_3 \quad \rho(r_1) = n \quad \rho(r_2) = \delta \quad \sigma(n..(n + \delta)) = w \quad k + |\text{instr}(\sigma, k)| = m}{(\rho, \sigma) \rightarrow (\rho[ip \leftarrow m, r_3 \leftarrow w], \sigma)}
\]

\[
\frac{\rho(ip) = k \quad \text{instr}(\sigma, k) = \text{mov}\ r_1\ r_2 \quad \rho(r_1) = w \quad \rho(r_2) = n \quad k + |\text{instr}(\sigma, k)| = m}{(\rho, \sigma) \rightarrow (\rho[ip \leftarrow m], \sigma[n \leftarrow w])}
\]

\[
\frac{\rho(ip) = k \quad \text{instr}(\sigma, k) = \text{op}\ r_1\ r_2\ r_3 \quad \text{op}(\rho(r_1), \rho(r_2)) = w \quad k + |\text{instr}(\sigma, k)| = m}{(\rho, \sigma) \rightarrow (\rho[ip \leftarrow m, r_3 \leftarrow w], \sigma)}
\]

where \( \text{op} \in \{\text{add}, \text{sub}, \text{mul}, \text{div}, \text{mod}, \text{ccat}, \text{eq}, \text{leq}, \text{and}, \text{or}\} \) (*)

\[
\frac{\text{instr}(\sigma, \rho(ip)) = \text{l\_shift}\ r_1\ r_2 \quad \rho(r_1) = l.w \quad \rho(ip) + |\text{instr}(\sigma, \rho(ip))| = m}{(\rho, \sigma) \rightarrow (\rho[ip \leftarrow m, r_1 \leftarrow w, r_2 \leftarrow l], \sigma)} \; (**)
\]

\[
\frac{\text{instr}(\sigma, \rho(ip)) = \text{l\_ccat}\ l\ r \quad \rho(r) = w \quad \rho(ip) + |\text{instr}(\sigma, \rho(ip))| = m}{(\rho, \sigma) \rightarrow (\rho[ip \leftarrow m, r \leftarrow l.w], \sigma)}
\]

\[
\frac{\rho(ip) = k \quad \text{instr}(\sigma, k) = \text{test}\ r_1\ r_2 \quad \rho(r_1) = \top \quad k + |\text{instr}(\sigma, k)| = m}{(\rho, \sigma) \rightarrow (\rho[ip \leftarrow m], \sigma)}
\]

\[
\frac{\rho(ip) = k \quad \text{instr}(\sigma, k) = \text{test}\ r_1\ r_2 \quad \rho(r_1) = \bot \quad \rho(r_2) = m}{(\rho, \sigma) \rightarrow (\rho[ip \leftarrow m], \sigma)}
\]

\[
\frac{\rho(ip) = k \quad \text{instr}(\sigma, k) = \text{jump}\ r \quad \rho(r) = m}{(\rho, \sigma) \rightarrow (\rho[ip \leftarrow m], \sigma)}
\]

\[
\frac{\rho(ip) = k \quad \text{instr}(\sigma, k) = \text{not}\ r \quad \rho(r) = b \quad k + |\text{instr}(\sigma, k)| = m}{(\rho, \sigma) \rightarrow (\rho[ip \leftarrow m, r \leftarrow \neg b], \sigma)}
\]

\[
\frac{\rho(ip) = k \quad \text{instr}(\sigma, k) = \text{cpy}\ r_1\ r_2 \quad \rho(r_1) = w \quad k + |\text{instr}(\sigma, k)| = m}{(\rho, \sigma) \rightarrow (\rho[ip \leftarrow m, r_2 \leftarrow w], \sigma)}
\]

(*) Subtraction is defined on natural numbers as \( \text{sub}(n, m) = \max(0, n - m) \). Concatenation is \( \text{ccat}(u, v) = u.v \). For boolean operations, there are two (arbitrary) values \( \bot, \top \), respectively for "false" and "true". Usually, \( 0 \) serves as false, and any other value is considered as true. (**) where \( l \in B \). The l_shift operation on the empty word returns the empty word. The rule for r_shift is analogous and has been omitted, as is the rule for r_ccat. In case of an instruction \( \text{op}\ r_1\ r_2\ ip \), we give priority to the normal flow, that is \( ip = k + |\text{instr}(\sigma, k)| \) after the instruction. The same remark holds for l_shift, l_ccat, ...

Figure 1. The rules of the operational semantics
**Definition 3** (Time complexity). Let \( p \in P \). We define \( \text{time}(p(x_1, \ldots, x_n)) \) to be its computation length, that is: \( \text{time}(p(x_1, \ldots, x_n)) = \min\{k \in \mathbb{N} \mid c_0(p, x_1, \ldots, x_n) \rightarrow^k (\rho, \sigma) \text{ with } \text{instr}(\rho(ip)) = \text{stop}\} \), and it is undefined otherwise.
We say that a program \( p \) is stable if for all computations \( c_0(p, w_1, \ldots, w_k) \rightarrow^n (\rho_n, \sigma_n) \),
- \( \sigma_n(0..(|p| - 1)) = p \),
- the current instruction is an instruction of \( p \), that is \( \rho_n(ip) < |p| \).

In other words, the text of the program remains unchanged, and the instruction pointer never goes outside the program.
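Assuming configurations are available as data, the two conditions can be checked along any finite prefix of a computation; a hypothetical Python sketch:

```python
# Checking the two conditions of a stable (static) program on a finite
# prefix of a computation. A trace is a list of configurations
# (rho, sigma): rho maps register names to values, sigma maps
# addresses to cells; p is the program as a list of cells.

def is_stable_prefix(p, trace):
    for rho, sigma in trace:
        # 1. the text of the program is unchanged in the store
        if any(sigma.get(i) != p[i] for i in range(len(p))):
            return False
        # 2. the instruction pointer stays inside the program
        if not (0 <= rho["ip"] < len(p)):
            return False
    return True
```

Of course this only refutes stability (on a finite prefix); establishing it for all computations is a different matter, as Section IV makes precise.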
**Proposition 4.** There is a constant \( R \) such that any stable program for a machine with \( k + 1 \) registers working in time \( T(n) \) (where \( n \) is the size of the input) can be simulated by a stable program written for a machine with \( R \) registers and a computation time \( O(T(n)) \).
**Proof:** The principle of our simulation is to use 7 registers to encode the \( k + 1 \) registers \( ip, r_1, \ldots, r_k \) of the simulated pseudo-program \( p \). Let us call them \( ip, m, M, np, r, tp0, tp1 \):
- \( m \) contains the maximal length of the registers \( r_i \),
- \( M \) contains an address of a free zone of the memory,
- \( np \) contains a word made of \( m \times k \) "□" symbols, used to "clean" the memory,
- \( r \) encodes the content of the \( k \) registers (but not \( ip \)),
- \( tp0 \) and \( tp1 \) are temporary registers.

\( r \) is organized as follows: \( \rho(r) = \rho(r_1).\square^{k_1}.\cdots.\rho(r_k).\square^{k_k} \) where \( k_i + |\rho(r_i)| = \rho(m) \), so that each register occupies a slot of width \( \rho(m) \). To get access to the value of register \( r_i \), we perform the following operations:
\[
\begin{align*}
\text{mov } r \ M; & \quad \text{move } r \text{ in (free) memory} \\
\text{mul } i \ m \ tp0; \text{add } M \ tp0 \ tp0; & \quad \text{computes the address of } r_i \\
\text{load } tp0 \ m \ tp0; & \quad \text{load the content of } r_i \\
\text{mov } np \ M; & \quad \text{clean the memory}
\end{align*}
\]
To push the value stored in \( tp0 \) corresponding to register \( r_i \) into \( r \), we perform:
\[
\begin{align*}
\text{mov } r \ M; & \quad \text{as above} \\
\text{mul } i \ m \ tp1; \text{add } M \ tp1 \ tp1; & \quad \text{computes the address of } r_i \\
\text{mov } tp0 \ tp1; & \quad \text{pushes } tp0 \text{ in memory} \\
\text{mul } m \ k \ tp1; \text{load } M \ tp1 \ r; & \quad \text{loads the block back in } r
\end{align*}
\]
One may observe that these operations can be done in constant time. The management of \( m \), \( M \) and \( np \) is facilitated by the following observation: the size of the result of each operation \( \text{op}(m, n) \) is easily bounded by \( O(|m| + |n|) \). Augmenting the values of the three registers \( m \), \( M \) and \( np \) accordingly can be done in constant time. Using the length program of Example 2, we can give an initial value to \( M \) and \( np \). It is then routine to write the entire simulation.
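The slot encoding of the proof can be sketched in Python; string slicing stands in for the constant number of machine instructions, and the blank symbol is written `_` here.

```python
# Register packing from Proposition 4: the contents of k registers are
# stored in one string r as k fixed-width slots of width m, padded
# with the blank symbol. (Pure-Python sketch of the encoding only.)

BLANK = "_"  # stands for the blank symbol of the paper

def get_slot(r, i, m):
    """Content of simulated register i (padding stripped)."""
    return r[i * m : (i + 1) * m].rstrip(BLANK)

def set_slot(r, i, m, w):
    """Write w (|w| <= m) into slot i, re-padding to width m."""
    assert len(w) <= m
    return r[: i * m] + w.ljust(m, BLANK) + r[(i + 1) * m :]
```

As long as every register content fits in the current width \( m \), both accesses touch a single fixed-width slot, which is what keeps the cost of each simulated step constant.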
### IV. Self-Modifying Programs
**Definition 5.** A pseudo-program \( p \) is said to be self-modifying whenever it is not stable. \( S \) denotes the set of self-modifying pseudo-programs and \( N = B^* \setminus S \) the set of non-self-modifying, that is stable, pseudo-programs.
In other words, for a self-modifying program, either the code of the pseudo-program has been modified during the execution, or the instruction pointer goes outside the code. Actually, there is some room in the definition of self-modification. One may argue, solution (1), that modifying the memory within \( \sigma_0(0 \ldots |p| - 1) \) corresponds to self-modification. However, there is no reason to restrict the program to its initial segment: indeed, a program can write a new instruction in another part of the memory, and then jump to this instruction. Solution (1) is too restrictive since it does not cover programs which dynamically transform their code. So, one may imagine extending the scope of the definition to the entire memory. That is solution (2): it corresponds to any program which writes a new instruction in memory. But, again, since there is no clear distinction between data and instructions, it may happen that a bunch of data is wrongly interpreted as an instruction. Solution (2) thus considers as self-modifying some programs which execute only instructions present at the beginning.
**Example 3.** A short (if not the shortest) self-modifying program is:
\[
\begin{align*}
\text{cpy ip ap;} & \quad \text{gets the address of the current instruction} \\
\text{l_ccat "stop" } r; & \quad \text{stores the word stop in } r \\
\text{mov } r \ ap; & \quad \text{rewrites the first instruction} \\
\text{jump } ap & \quad \text{jumps to the rewritten instruction}
\end{align*}
\]
where the second instruction uses the shorthand notation of Example 1.
**Definition 6** (Running programs). A program \( p \) is said to be running whenever for all computations \( c_0(p, w_1, \ldots, w_k) = (\rho_0, \sigma_0) \rightarrow (\rho_1, \sigma_1) \rightarrow \cdots \rightarrow (\rho_n, \sigma_n) \), the sequence \( \rho_0(ip), \ldots, \rho_n(ip) \) is increasing. A running program never goes back.
Running stable programs are executed in a constant number of steps. Clearly, that subset of programs is not Turing-complete. But the set of self-modifying running programs is Turing-complete. This shows one of the fundamental differences between stable programs and self-modifying programs. To prove this claim, we compile any stable program into a self-modifying running program. Since stable programs are Turing-complete (Proposition 8), the conclusion follows.
Let us consider a stable program \( p \). To avoid technicalities, we suppose that it is written with \( n \) instructions \( I_{1}, \ldots, I_{n} \), using registers \( ip, r_1, \ldots, r_k \). We suppose furthermore that \( ip \) does not appear as the target of some instruction \( op \ r_1 \ r_2 \ r_3 \). We compile it using the same registers with 4 extra registers: \( m, M, tp0, tp1 \). The principle is to write a program \( p' \) which simulates the instructions of \( p \). Along the computation, the contents of the \( r_i \) are equal to the original contents, and the memory looks like
\[
\underbrace{p' \mid p' \mid \cdots}_{\text{copies of } p'} \mid \underset{M}{p'} \mid \sigma'
\]
where \( M \) gives the address of the current copy of \( p' \), \( \sigma' \) corresponds to the content of the simulated memory, and \( m \) gives the length of the occupied memory. Initially, \( \rho(M) = |p'| \) and \( m \) is computed by the \( \text{length} \) program.
Each instruction of \( p \) is translated into \( c \) instructions (see the translation rules below). Consequently, we have \( |p'| = c|p| \). The number of copies of \( p' \) in memory is given by \( ip \ \text{div} \ |p'| \), and the instruction pointer \( ip \) of \( p' \) corresponds to the execution of instruction \( (ip \ \text{mod} \ |p'|)/c \) of \( p \). Now, the rules of the translation are given by:
\[
\begin{align*}
\text{op } r_1 r_2 r_3 & \mapsto \text{op } r_1 r_2 r_3 \\
\text{op'} r_1 r_2 & \mapsto \text{op'} r_1 r_2 \\
\text{op'' } r_1 & \mapsto \text{op'' } r_1 \\
\text{mov } r_1 r_2 & \mapsto \text{add } r_1 M \text{ tp0 } ; \text{mov } \text{tp0 } r_2 \\
\text{load } r_1 r_2 r_3 & \mapsto \text{add } r_1 M \text{tp0 } ; \text{load tp0 } r_2 r_3 \\
\text{stop} & \mapsto \text{stop} \\
\text{jump } r & \mapsto \text{shift\_mem } M \ m; \ \text{mul } c \ r \ \text{tp0}; \ \text{add } M \ \text{tp0} \ \text{tp0}; \ \text{add } |p'| \ M \ M; \ \text{jump } \text{tp0} \\
\text{test } r \ q & \mapsto \text{add } ip \ |p'| \ \text{tp1}; \ \text{add } c \ \text{tp1} \ \text{tp1}; \ \text{mul } c \ q \ \text{tp0}; \ \text{add } M \ \text{tp0} \ \text{tp0}; \ \text{shift\_mem } M \ m; \ \text{add } |p'| \ M \ M; \ \text{test } r \ \text{tp0}; \ \text{jump } \text{tp1} \\
\end{align*}
\]
- op corresponds to ternary operators, op' to binary operators and op'' to unary operators (not, l_ccat, r_ccat).
- op \( m \ r_1 \ r_2 \) where \( m \) is an integer is a shorthand defined as in Example 1.
- shift_mem \( M \ m \) shifts the memory content by \( |p'| \) letters using \( M \) and \( m \), and makes a new copy of \( p' \) at address \( M \).
- when \( ip \) is used as an operand of some instruction, we get its content through the instructions mod \( ip \ |p'| \ tp0 \); div \( tp0 \ c \ tp0 \), and replace \( ip \) by \( tp0 \).
- the management of \( m \) is not shown in the translation, but it is simple: at each step, multiply it by 2.
- to make all translations have exactly \( c \) instructions, we pad the shorter ones with dummy instructions cpy \( r \ r \).
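The div/mod correspondence between the instruction pointer of \( p' \) and the simulated instruction of \( p \) can be stated directly; the helper below is illustrative, with arbitrary values of \( c \) and \( |p| \) in the usage.

```python
# Address arithmetic of the compilation: each instruction of p becomes
# c instructions of p', and memory holds successive copies of p'.

def simulated_position(ip, len_p, c):
    """From the instruction pointer of p', recover the pair
    (index of the current copy, instruction of p being simulated)."""
    len_pp = c * len_p                 # |p'| = c * |p|
    return ip // len_pp, (ip % len_pp) // c
```

For instance, with \( |p| = 4 \) and \( c = 8 \) (so \( |p'| = 32 \)), `simulated_position(70, 4, 8)` returns `(2, 0)`: the machine is in the third copy of \( p' \), simulating instruction 0 of \( p \).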
### V. Computing Non-Self-Modifying Programs from Pseudo-Programs
Now, thinking of self-modifying programs as obfuscated forms of normal programs, one may argue that the meaning of a self-modifying program is (one of) its non-self-modifying forms.
**Definition 7** (Deobfuscating semantics). A deobfuscating semantics is a function \( \psi : P \rightarrow N \) such that for all \( p \in P \), we have \( [\psi(p)] = [p] \).
To define an effective deobfuscating semantics, we have to show that the set of functions computed by pseudo-programs is Turing complete. There is nothing surprising with that result. However, to keep a constructive approach, and since some part of the definitions are used later on, we provide a complete proof of it.
For that sake, we introduce a slight variant of GOTO-programs as employed by Jones in [6]. We suppose given a finite set of variables \( X_1, \ldots, X_n \), ranging on words. A GOTO-program is then given by a list of instructions \( I_1, I_2, \ldots, I_n \) with instructions being given by:
\[
I := X_i := \text{nil } | X_i := a | X_i := X_j | X_i := \text{l_shift } X_j | X_i := \text{ccat } X_j, X_k | \text{if a goto } \ell | \text{stop}
\]
where \( a \in B \) and \( \ell \in \mathbb{N} \). A configuration is given by a valuation of the variables and the label of the instruction to be executed; we denote it \( (x_1, \ldots, x_n, \ell) \), where \( x_i \) is the content of \( X_i \) and \( \ell \) is the current label. If \( \ell > n \) or \( \ell = 0 \) or \( \ell \) labels a stop instruction, the machine stops. Otherwise, the semantics of instructions is a binary relation on configurations. Suppose that \( I_\ell \) is an assignment \( X_i := \ldots \); then \( (x_1, \ldots, x_i, \ldots, x_n, \ell) \rightarrow (x_1, \ldots, x'_i, \ldots, x_n, \ell+1) \) where \( x'_i \) is computed from the \( x_j \)'s according to the right-hand side of the assignment. For \( I_\ell = \text{if a goto } \ell' \), we have \( (a, x_2, \ldots, x_n, \ell) \rightarrow (a, x_2, \ldots, x_n, \ell') \) when \( x_1 = a \), and otherwise \( (x_1, x_2, \ldots, x_n, \ell) \rightarrow (x_1, x_2, \ldots, x_n, \ell + 1) \).
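The semantics just described is a direct transition system, and a small Python interpreter for it is routine. Words are strings, instructions are tuples; the tuple encoding and the fuel bound are assumptions of this sketch.

```python
# A direct interpreter for the GOTO-programs above. Programs are lists
# of instruction tuples, env maps variable names to words (strings),
# and labels are 1-based as in the text.

def run_goto(prog, env, fuel=10_000):
    ell = 1
    while fuel and 1 <= ell <= len(prog):
        instr = prog[ell - 1]
        op, nxt = instr[0], ell + 1
        if op == "stop":
            return env
        elif op == "nil":                    # X_i := nil
            env[instr[1]] = ""
        elif op == "letter":                 # X_i := a
            env[instr[1]] = instr[2]
        elif op == "copy":                   # X_i := X_j
            env[instr[1]] = env[instr[2]]
        elif op == "l_shift":                # X_i := l_shift X_j
            env[instr[1]] = env[instr[2]][1:]
        elif op == "ccat":                   # X_i := ccat X_j X_k
            env[instr[1]] = env[instr[2]] + env[instr[3]]
        elif op == "goto" and env["X1"] == instr[1]:
            nxt = instr[2]                   # if a goto l'
        ell, fuel = nxt, fuel - 1
    return env
```

Stopping when \( \ell \) leaves the range \( 1..n \) and the fall-through to \( \ell + 1 \) on a failed test mirror the transition relation above.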
**Proposition 8.** Any function computed by a GOTO-program can be computed by a program in \( N \), and consequently by a program in \( P \). Conversely, programs in \( P \) (and \( N \)) can be simulated by GOTO-programs.
**Proof:** It is done by a direct simulation of instructions.
We use the registers \( r_i \) for the variables \( X_i \) plus two extra registers: \( np \) contains the empty word, and \( tp1 \) is a temporary register. There is no explicit register for \( \ell \); actually \( ip \) will follow the flow of GOTO-instructions. Given a GOTO instruction \( I \), we associate the following instructions \( \alpha(I) \):
\[
\begin{align*}
X_i := \text{nil} & \mapsto \text{cpy } np \ r_i \\
X_i := a & \mapsto \text{cpy } np \ r_i; \ \text{l_ccat } a \ r_i \\
X_i := X_j & \mapsto \text{cpy } r_j \ r_i \\
X_i := \text{l_shift } X_j & \mapsto \text{l_shift } r_j \ tp1; \ \text{cpy } r_j \ r_i; \ \text{ccat } tp1 \ r_j \ r_j \\
X_i := \text{ccat } X_j \ X_k & \mapsto \text{ccat } r_j \ r_k \ tp1; \ \text{cpy } tp1 \ r_i \\
\text{stop} & \mapsto \text{stop} \\
\text{if a goto } \ell & \mapsto \text{cpy } np \ tp1; \ \text{l_ccat } @I_\ell \ tp1; \ \text{test}_a \ r_1 \ tp1
\end{align*}
\]
where \( @I_\ell \), appearing in the translation of if a goto \( \ell \), is an integer which will be instantiated according to the rest of the instructions. Given a GOTO-program \( M = 1 : I_1, \ldots, n : I_n \), we translate it as the concatenation of instructions \( \alpha(I_1).\alpha(I_2) \cdots \alpha(I_n) \) where the \( @I_\ell \) appearing in the translation are computed as follows. Let us call \( @I_\ell \) the address of instruction \( I_\ell \) in memory, that is the length of the string \( \alpha(I_1) \cdots \alpha(I_{\ell-1}) \). Let us write \( \llbracket I \rrbracket \) the size of the encoding of \( I \) where we drop the addresses; that is, for instance, \( \llbracket X_i := X_j \rrbracket = |\alpha(X_i := X_j)| \) and \( \llbracket \text{if a goto } \ell \rrbracket = |\alpha(\text{if a goto } \ell)| - |\text{l_ccat } @I_\ell \ tp1| \).
The addresses verify:
$$\begin{cases}
@I_1 = 0 \\
@I_{i+1} = @I_i + \llbracket I_i \rrbracket + \mathcal{K} \times |@I_\ell| & \text{if } I_i = \text{if a goto } \ell \\
@I_{i+1} = @I_i + \llbracket I_i \rrbracket & \text{otherwise}
\end{cases}$$
In other words, we have again a fixpoint equation. It can be solved as in Appendix A.
For the other direction, the rules of the operational semantics show clearly that the successor relation is computable, and then GOTO-computable.
**A. N and P are acceptable languages**
**Proposition 9.** The sets \( N \) and \( P \) of programs are acceptable languages in the sense of Rogers and Uspensky [7], [8].
**Proof:** We have seen that both languages \( N \) and \( P \) are Turing-complete. We need to provide two more constructions: a specializer and a self-interpreter for \( N \) and \( P \). We recall that the specializer \( S_n \) is defined by the equation \( [S_n(p, x_0)](x_1, \ldots, x_n) = [p](x_0, \ldots, x_n) \). Referring to the operational semantics, both for \( N \) and \( P \), we state that \( S_n(p, x_0) = p.\square.x_0 \) solves the problem.
For the universal function, from Proposition 8, we know that there is a GOTO-program \( M \) such that computing \( M \) on \( (p, x_1, \ldots, x_n) \) we get \( [p](x_1, \ldots, x_n) \). Translating this machine back (with the same Proposition 8), we get a program \( I_M \) such that \( [I_M](p, x_1, \ldots, x_n) = [p](x_1, \ldots, x_n) \).
**Remark 10.** From its definition, it is straightforward (but it must be observed) that the specializer \( S_n \) is efficient: that is, \( \text{time}(S_n(p, x_0)(x_1, \ldots, x_n)) = \text{time}(p(x_0, \ldots, x_n)) \).
We end this part with Kleene’s recursion theorem. In [9], we have shown its central role in computer virology. The theorem can be used as a compiler for viruses. In particular, we provided a classification of viruses by means of a stratification of some variants of the Theorem [10].
**Theorem 11** (Kleene's fixpoint). Given a computable function \( g : B^* \times B^* \rightarrow B^* \), there is a program \( e \) in \( N \) such that \( [e](x) = g(e, x) \).
**Proof:** Suppose that \( g \) is also a program for the function \( g \). The pseudo-program \( p_{1,2} \), by scanning the memory, pushes the first argument in register \( out \) and leaves the memory unchanged. \( p_{2,2} \) is the second projection: it pushes the second argument in register \( out \). Finally, we suppose that clean \( p \) cleans the memory from the address stored in \( p \). The function \( x, y \mapsto g(S_1(x, x), y) \) is then computed by the following program:
$$\begin{align*}
q \triangleq \ & \text{l_ccat } k \ p; & & \text{// the length of } q \\
& p_{1,2}; \ \text{cpy out } tp0; \\
& p_{2,2}; \ \text{cpy out } tp1; \\
& \text{clean } p; \ \text{ccat } \square \ tp0; \\
& \text{ccat } tp0 \ tp0 \ tp0; \\
& \text{ccat } tp0 \ tp1 \ tp1; \\
& \text{mov } tp1 \ p; \ g
\end{align*}$$
where $k$ is the length of $q$. Defining $e = S_1(q, q) = q.\square.q$, we have the equalities:
$$\begin{align*}
[e](x) & = [S_1(q, q)](x) \\
& = [q](q, x) \\
& = g(S_1(q, q), x) = g(e, x)
\end{align*}$$
**B. Semantics by deobfuscation**
**Proposition 12.** The set $S$ is $\Sigma_1$-complete.
**Proof:** The formulation of Definition 5 shows that $S$ is $\Sigma_1$. We show that it is actually complete. Take a Turing machine $M$ and call $\alpha(M)$ its translation according to Proposition 8, where one transforms the translation of stop into:
$$\text{stop} \mapsto \text{cpy } ip \ tp1; \ \text{cpy } np \ tp0; \ \text{l_ccat "stop" } tp0; \ \text{mov } tp0 \ tp1; \ \text{jump } tp1$$
If the machine halts, one of the stop instructions is executed. The instruction mov $tp0$ $tp1$ writes a stop instruction, and then we jump to this instruction. Consequently, the program is self-modifying. For a non-halting machine, as we have seen, the simulation is performed by a non-self-modifying program. Thus the machine halts if and only if its translation is in $S$, and we get the desired result.
This result is important in our quest for deobfuscation as a semantics. Let us call $\psi : P \rightarrow N$ the deobfuscation semantics we are looking for. It is natural to say that the semantics of a non-self-modifying program is the program itself (since it is not obfuscated!). To sum up, we are looking for a function $\psi$ such that:
\begin{enumerate}
\item $[\psi(p)](x) = [p](x)$,
\item for $p \in N$, $\psi(p) = p$.
\end{enumerate}
Unfortunately, there is no such computable function. This is a corollary of Proposition 12. We prove it ad absurdum. Suppose that $\psi$ is computable. Then, we have $p \in S \Leftrightarrow \psi(p) \neq p$: indeed, if $p \in S$, then $\psi(p) \in N$ implies that $\psi(p) \neq p$; otherwise, $p = \psi(p)$ by the requirement on $\psi$. So $S$ would be decidable, contradicting its $\Sigma_1$-completeness.
Consequently, the price of the effectiveness of the deobfuscation function is a less precise deobfuscation.
**Theorem 13** (Rogers [8]). There is a computable isomorphism between any two acceptable languages, that is, between any two Turing-complete programming languages equipped with a specializer and a universal function.
As a corollary, since both $N$ and $P$ are acceptable languages, there is a computable procedure which sends any program $p \in P$ to some program in $N$. This isomorphism actually defines a deobfuscating semantics as mentioned in the beginning of the Section.
This is, to our knowledge, an original use of this theoretical result as a tool to deobfuscate programs. However, Rogers's construction has some drawbacks. First, although the procedure is effective, we have no idea of its complexity. Second, and this is the toughest point, this (de-)obfuscating semantics does not provide a link between the complexity of the obfuscated form of a program and that of its deobfuscated one. In particular, it could happen that the computation of the deobfuscated form of a program takes much more time than its obfuscated form. This goes clearly against the intuition of (de-)obfuscation. One of the requirements of Rogers's construction is that the morphism is actually bijective. This feature is meaningless in the present setting, where we only need a compilation procedure (neither necessarily injective, nor surjective).
**Theorem 14.** There is a compilation procedure $\pi : P \to N$ such that:
- $\pi$ is computable in polynomial time,
- for each program $p \in P$ and input $d$, we have $\text{time}(\pi(p)(d)) = O(\text{time}(p(d)))$.
**Proof:** We use the first Futamura projection [11]. This construction is quite analogous to the virtual machines that we use daily to analyse malware in a safe environment.
Consider that we have a (relatively) efficient interpreter $I_N^P$ of $P$ programs (written in $N$), that is, a program $I_N^P$ verifying $[I_N^P](p, d) = [p](d)$ and $\text{time}(I_N^P(p, d)) = O(\text{time}(p(d)))$. Then, the following procedure solves the problem: $\pi : p \mapsto S_1(I_N^P, p)$. First, it can be computed in polynomial time: indeed, $I_N^P$ is a constant parameter and the definition of $S_1$ shows it is computable in polynomial time. Second, the compilation is correct:
$$[S_1(I_N^P, p)](d) = [I_N^P](p, d) = [p](d)$$
And third, for the time complexity of the program $p$, we have the equations:
$$\text{time}(S_1(I_N^P, p)(d)) = \text{time}(I_N^P(p, d)) = O(\text{time}(p(d)))$$
So, the last point of the proof is to show the existence of such an interpreter. Given a pseudo-program with registers $r_1,\ldots,r_n$, we simulate it, using registers $r'_1,\ldots,r'_n$ plus some extra registers $ip', tp0, tp1, tp2, K$. The interpreter $I$ is designed as:
```
l_ccat k K          //the length of instructions
l_ccat k1 tp1       //tp1 points to cpy tp2 ip'
load ip' K tp0      //load next instruction
switch tp0 with
case load r1 r2 r3 -> load r'1 r'2 r'3
case mov r1 r2 -> mov r'1 r'2
case op r1 r2 r3 -> op r'1 r'2 r'3
case l_shift r1 r2 -> l_shift r'1 r'2
case r_shift r1 r2 -> r_shift r'1 r'2
case test r1 r2 -> cpy r'2 tp2.
     test r'1 tp1
case jump r -> cpy r' tp2. jump tp1
case stop -> stop
end_switch
add K ip' ip'.
jump k2
cpy tp2 ip'. jump k2
```
where $k_1$ corresponds to the address of the instruction cpy tp2 ip’ and $k_2$ to the instruction load ip’ K tp0. The switch construct is defined as follows.
```
switch tp1 with
case w1 -> e1        is defined as    test_w1 tp1 m2. e1. jump k
case w2 -> e2                         test_w2 tp1 m3. e2. jump k
case w3 -> e3                         test_w3 tp1 m4. e3. jump k
```
where $m_i$ with $2 \leq i \leq n$ points to the instruction corresponding to the $i$-th test and $k$ points to the address of the end of the construct.
It is clear that $I$ is a non-self-modifying program. From the construction, one may observe that each instruction is simulated by a finite number of instructions. Consequently, the time loss of our simulation is constant for each instruction, more precisely, we have $\text{time}(I(p,d)) = O(\text{time}(p(d)))$. And lastly, the program above is actually stable. We have seen in Section III that for these programs, we could have an encoding of registers at a constant cost. It is then routine to encode the interpreter with the right number of registers.
### VI. Conclusion
We have proposed a computational model which is close to the functioning of real machines. The program is loaded in memory, and can be changed along the computations.
This paper raises some open questions. First, we think that the notion of self-modification should be refined: one way is to use [4], where the authors show that typing can be used to characterize pseudo-programs. Secondly, we have shown that there is no deobfuscating procedure keeping the stable programs constant. Linked to this question, can we find some tools to approximate both sets $N$ and $S$, from above, or from below? Such techniques find an immediate application in the verification of the security of computer systems. Finally, writing self-modifying programs is difficult. We think that Kleene's recursion theorem is a major tool to build them: indeed, fixpoint programs have access to their own code and, consequently, can manipulate it. Lastly, running programs can be seen as a kind of generalized trace.
REFERENCES
APPENDIX
Referring to Example 2, there is one issue with the $k_i$'s which we discuss now. One may observe that they are defined by means of themselves. Indeed, consider for instance $k_1$. The length of the macro instruction $l\_ccat$ $k_1$ $r$ depends on $|k_1|$, that is on $k_1$. So, the length of the program depends on $k_1$. But $k_1$ is defined as the length of the program plus one!
To solve this, we use a fixpoint equation. Referring to the definition of $l\_ccat$ $n$ $tp$, the size of these instructions is $K \times |n|$ where $K$ has been defined as the length of instructions (see Section II). Let
us introduce $\alpha = \llbracket \text{l_ccat } 1 \ ap \rrbracket$, $\beta = \alpha + \llbracket \text{load } r \ ap \ p; \ \text{test}_\square \ p \ s; \ \text{add } ap \ out \ out; \ \text{add } ap \ r \ r; \ \text{jump } b \rrbracket$ and $\gamma = \beta + \llbracket \text{stop} \rrbracket + 1$. Consider now the functions:
$$f_1(x_1, x_2, x_3) = (|x_1| + |x_2| + |x_3|) \times K + \gamma$$
$$f_2(x_1, x_2, x_3) = (|x_1| + |x_2| + |x_3|) \times K + \beta$$
$$f_3(x_1, x_2, x_3) = (|x_1| + |x_2| + |x_3|) \times K + \alpha$$
$$f(x_1, x_2, x_3) = (f_1(x_1, x_2, x_3), f_2(x_1, x_2, x_3), f_3(x_1, x_2, x_3))$$
One may observe that $(k_1, k_2, k_3)$ is a fixpoint for the function $f$. To compute it, we use the algorithm:
```
Addr = [1,1,1]
Addr’ = Addr
while(Addr’ != Addr) do
Addr = Addr’
Addr’ = f(Addr) //with f defined above
od
return Addr
```
If the algorithm ends, then the result is a fixpoint. Let us prove that the algorithm terminates. One may observe that $f$ is contracting for sufficiently large values. Indeed, let us write $\delta(m, n) = \max(m - n, n - m)$. Whatever $c > 0$ is, by the mean value theorem, for all $x, y > \frac{1}{c \times \ln(|B| - 1)}$, one has $\delta(|x|, |y|) = \delta(\log_{|B| - 1}(x), \log_{|B| - 1}(y)) \leq c \times \delta(x, y)$. We extend $\delta$ to triples by $\delta((x_1, x_2, x_3), (y_1, y_2, y_3)) = \delta(x_1, y_1) + \delta(x_2, y_2) + \delta(x_3, y_3)$. Take $c = \frac{1}{6 \times \mathcal{K}}$; we compute
$$\delta(f(x_1, x_2, x_3), f(y_1, y_2, y_3)) = 3 \times \mathcal{K} \times \delta(|x_1| + |x_2| + |x_3|, |y_1| + |y_2| + |y_3|)$$
$$\leq 3 \times \mathcal{K} \times (\delta(|x_1|, |y_1|) + \delta(|x_2|, |y_2|) + \delta(|x_3|, |y_3|)) \leq 3 \times \mathcal{K} \times c \times \delta((x_1, x_2, x_3), (y_1, y_2, y_3)) = \frac{1}{2} \delta((x_1, x_2, x_3), (y_1, y_2, y_3))$$
As a consequence, the algorithm converges. Moreover, the convergence is geometric with respect to the input. Consequently, the compilation procedure can be performed in polynomial time.
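The iteration above is easy to replay in Python. The constants K, alpha, beta, gamma and the base b below are illustrative stand-ins for the instruction-size constants of the text; `numeral_len(x, b)` is $|x|$, the length of the base-$b$ numeral of $x$.

```python
# The fixpoint iteration of the appendix, with made-up constants.

def numeral_len(x, b=2):
    n = 1
    while x >= b:
        x //= b
        n += 1
    return n

def solve_addresses(K=4, alpha=8, beta=40, gamma=45, b=2):
    def f(k1, k2, k3):
        s = numeral_len(k1, b) + numeral_len(k2, b) + numeral_len(k3, b)
        return (s * K + gamma, s * K + beta, s * K + alpha)
    addr, prev = (1, 1, 1), None
    while addr != prev:                 # iterate f until stable
        prev, addr = addr, f(*addr)
    return addr
```

With these illustrative constants the iteration stabilizes after a handful of rounds, reflecting the contraction argument above: each round can change the numeral lengths only logarithmically.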
- About Data Formats - background and philosophy
- BitstreamFormat Conversion Instructions - how to convert a DSpace archive
- BitstreamFormat Workbench - manual for the command-line admin utility
- Paper presented at OR08: BitstreamFormat Renovation: DSpace Gets Real Technical Metadata
- Slides from BSF Renovation presentation at Open Repositories 2008, Southampton, UK
Downloads:
Download Patch to DSpace 1.5 Source
Contents
1 Introduction
1.1 Purpose and Principles
2 Use Cases
3 Overview of Changes to DSpace Core and API
3.1 Content Model
3.1.1 Bitstream
3.1.2 BitstreamFormat
3.2 Core API
3.2.1 External Format Registry
3.2.2 Automatic Format Identification
3.2.3 Administration
4 Changes to Content Model
4.1 Exceptions
4.1.1 FormatRegistryException
4.1.1.1 FormatRegistryNotFoundException
4.2 FormatIdentifierException
4.3 FormatIdentifierTemporaryException
4.4 Bitstream
4.4.1 FormatConfidence
4.5 BitstreamFormat
4.5.1 Rationalization of Usage
4.5.2 Formal Definition
4.5.3 External Format Identifiers
4.5.3.1 The Namespace
4.5.3.2 Initial Set of Well-Known Namespaces
4.5.3.3 MIME Types excluded
4.5.3.4 Apple UTIs
4.5.4 Removing the "Internal" Flag
4.5.5 BitstreamFormat Properties
4.5.6 BitstreamFormat API
4.5.6.1 Add the following methods:
4.5.6.2 Remove these methods:
4.5.6.3 Existing Methods that Remain:
4.6 FormatRegistryManager
4.6.1 API
4.7 FormatRegistry
4.7.1 Format Registries are Tightly Coupled
4.7.2 Format Identification is Separate
4.7.3 FormatRegistry API
4.7.3.1 Registry Name
4.7.3.2 The Import Operation
4.7.3.3 The Update Operation
4.7.3.4 The ConformsTo Operation
4.8 The DSpace-Internal "Registry"
4.9 The Built-In Format Registries
4.9.1 "DSpace" (Traditional) Format Registry
4.9.2 "Provisional" Format Registry
4.10 Database Schema Changes
5 Automatic Format Identification
5.1 Format Identification Framework
5.1.1 The FormatHit Object
5.1.2 Confidence of Format Identification Hits
5.1.3 FormatIdentifier Interface
5.1.4 FormatIdentifierManager
5.1.5 Controlling the Format Identification Process
5.1.6 Implementing Automatic Format Identification
5.1.6.1 Random Access to Bitstream Contents
5.1.6.2 Format-Hit Comparison Algorithm
5.1.6.3 Default Recommended Format Identification Algorithm
5.1.6.4 Applying the results to a Bitstream
5.1.6.5 Implementing a FormatIdentifier plugin
5.1.6.6 Handling Conflicts
5.2 User Interface Elements Related to Data Formats
5.2.1 Display
5.2.2 Choosing A Format
5.2.2.1 Choosing Formats from an External Registry
5.2.3 Choosing / Confirming Automatic Identification
5.2.4 Editing BitstreamFormat Table
6 Administrative Operations Relating to BitstreamFormat(s)
6.1 Initial Setup
6.1.1 Configuration
6.1.2 Conversion from Previous DSpace Version
6.2 Reports
6.2.1 Format Identification Testing: Success Rate
6.2.2 Histogram of Formats In Use
6.3 Maintenance
6.3.1 Edit BitstreamFormat Metadata
6.3.1.1 Update BSFs From Remote Data Format Registries
6.3.2 Edit Bitstream Technical Metadata
6.3.2.1 Retry Format Identification
6.3.2.2 Override Format Choice
6.3.3 Maintain "DSpace" and "Provisional" Format Registries
6.3.4 Remove an External Format Registry
7 Omissions and Future Work
7.1 Your Comments
7.2 Preservation Applications
7.3 Container Formats
7.4 Documentation of Data Formats
8 Other Documentation
Introduction
In Automatic Format Identification using PRONOM and DROID, Adrian Brown defines a "data format" as:
The internal structure and encoding of a digital object, which allows it to be processed, or to be rendered in human-accessible form.
Note that this implies more than just knowing the common name of a Bitstream's format, e.g. "Adobe PDF". That name actually describes a family of formats. In order to know exactly how to recover the intelligence in a particular Bitstream, you'd want to know which specific version of PDF it is: later versions have features not found in earlier ones. The "internal structure and encoding" imposed by a data format is usually defined in exacting detail by a format specification document, and/or by the software applications that produce and consume that format.
For additional, extensive, background, see About Data Formats, which serves as a manifesto of sorts for this project.
Overall, this proposal affects:
- The way data formats are represented in the DSpace content model.
- Clarification and rationalization of the use of BitstreamFormat.
- Mechanisms for identifying the data format of a Bitstream.
- Integration of standards-based technical metadata about data formats which can be effectively shared with other applications.
The original DSpace design intentionally avoided the issue of describing data formats in such detail because there were already other efforts underway to thoroughly catalog data formats – and DSpace would eventually leverage their work. As of June, 2007, the most sophisticated data format registries are still in development, but some usable systems are operating in production. We propose to integrate external data format intelligence through a flexible plugin-based architecture to take advantage of what is currently available but leave a clear path for future upgrades and changes. It also lets each DSpace installation choose an appropriate level of complexity and detail in their format support.
Purpose and Principles
- Enable accurate, meaningful, fine-grained, and globally-understood identification of a Bitstream's data format.
- Maintain backward compatibility with most existing code, and existing archives.
- Introduce the binding of persistent, externally-assigned data format identifiers to BitstreamFormats.
- Integrate tightly with "standard" data format registries, using a plugin framework for flexible configuration:
  - Anticipate that the Global Digital Format Registry (GDFR) will be the registry of choice, but allow free choice of other metadata sources.
  - Recognize references to entries in "standard" data format registries in ingested content (e.g. technical MD in SIPs) to facilitate exchange of SIPs and DIPs.
  - The DSpace data model directly includes only the subset of format metadata it has an immediate use for, and references entries in an external format registry for the rest.
  - Refer to formats by the external data format registry's identifiers so format technical metadata is recognized outside of DSpace.
- Improve the automatic identification of data formats in batch and non-interactive content ingestion.
- Help interactive users identify formats easily and accurately during interactive submission.
- Rationalize use of the BitstreamFormat object:
  - Eliminate the overloaded use of the "License" format and "Internal" flag in BitstreamFormats to mark and hide deposit license bitstreams.
  - Attempt to accurately describe the data format of every Bitstream, even the ones created for internal use.
- Create a pluggable interface to external data format registries, to encourage experimentation and track developments in this highly active field.
- Add a separate pluggable format-identification interface to allow a "stack" of methods to identify the format of a Bitstream by various techniques.
Use Cases
See BitstreamFormat Renovation for the sketches of the anticipated use cases that drove this design. The text grew too large for one page.
Overview of Changes to DSpace Core and API
Here is a summary of all of the proposed changes, by section.
Content Model
Bitstream
Each Bitstream still refers to a BitstreamFormat object to identify its data format. In addition, the Bitstream gains two new properties:
1. A format confidence metric, which indicates (on a coarse symbolic scale) the certainty of the identification of its format, reflecting both accuracy and precision.
2. The source of the format identification, indicating the tool or mechanism responsible for the format technical metadata in this Bitstream.
BitstreamFormat
Although outwardly similar and largely backward-compatible, the BitstreamFormat has been completely gutted and re-implemented. It now serves as a local "cache" of format technical metadata and holds one or more external format identifiers, each of which refers to a complete technical metadata record in an external data format registry.
These external registries (as described below) are the authoritative source of format technical and descriptive metadata about data formats. This fundamental change lets DSpace take advantage of the extensive work and recognized standards offered by external format registries such as PRONOM and the GDFR.
Core API
External Format Registry
We add a plugin interface to provide access to external data format registries. Each registry is modeled as an implementation of the FormatRegistry interface. It is fairly simple; it only supports "importing" a format description into the local BitstreamFormat Cache, updating an existing format, and a few queries.
A single DSpace archive may be configured to access many external format registries. It usually will be, since no one registry currently has all the answers.
Backward-compatibility is provided by a built-in "DSpace" registry which contains all of the DSpace-1.4 formats.
Automatic Format Identification
The old org.dspace.content.FormatIdentifier is replaced by a configurable, extensible, plugin-based format identification framework. It is not part of the format registry plugin, because while some format recognition services live in a registry's software suite, others are independent of any registry.
Format identification is one of the most important improvements in this project. It is explained in complete detail below.
Administration
There are additions to the configuration and administration APIs to support configuration and maintenance of the new registry and format-identification frameworks.
Changes to Content Model
Here are detailed explanations of the planned changes to content model classes.
Exceptions
The following exceptions are thrown when a fatal error is encountered in the format registry and identification framework. They are similar in meaning to existing exceptions in the DSpace API, such as AuthorizeException – signalling a fatal error with enough context and explanation to communicate the cause to the user or administrator.
FormatRegistryException
Sent when there is a fatal error while accessing an external format registry or updating the local cache of format metadata in the DSpace RDBMS. Can be caused by incorrect or missing configuration entries, network problems, filesystem problems, etc.
FormatRegistryNotFoundException
A subclass of FormatRegistryException, this exception is sent in particular cases when looking up an external identifier fails although it should have been found (e.g. since it had been found before). In the common case of looking up an identifier for the first time, e.g. through BitstreamFormat.findByIdentifier(), no exception gets thrown because failure is a possibly-expected result.
FormatIdentifierException
Thrown when a format identification method encounters a fatal error which would cause it to return a false negative result. For example, if its configuration is missing or incorrect, the method throws this exception rather than silently failing. Simply failing to identify a format is in the realm of expected results and does not cause an exception.
FormatIdentifierTemporaryException
Thrown by the format identification method when it fails because of a "temporary" problem, e.g. when a network resource is not available. This subclass of FormatIdentifierException tells the identifier manager that it may succeed when retried later.
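The subclass relationships among these four exceptions can be sketched in plain Java. Only the class names and the subclassing come from this proposal; the constructors are assumptions added to make the example compile:

```java
// Illustrative sketch of the exception hierarchy described above.
// Only the names and subclassing come from the proposal; the
// constructors are assumptions for the sake of a compilable example.
class FormatRegistryException extends Exception {
    public FormatRegistryException(String message) { super(message); }
}

// Lookup of an external identifier failed although it should have succeeded.
class FormatRegistryNotFoundException extends FormatRegistryException {
    public FormatRegistryNotFoundException(String message) { super(message); }
}

// A format identification method hit a fatal error (not a mere "no match").
class FormatIdentifierException extends Exception {
    public FormatIdentifierException(String message) { super(message); }
}

// A "temporary" failure, e.g. an unavailable network resource; may succeed on retry.
class FormatIdentifierTemporaryException extends FormatIdentifierException {
    public FormatIdentifierTemporaryException(String message) { super(message); }
}
```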
Bitstream
The most significant change is that the Bitstream now remembers the confidence of its format identification, an enumerated value which indicates the certainty and source of its format identification. There is also a convenience method to invoke automatic format identification: since its result is almost always used to set the Bitstream's format anyway, this improves code clarity.
Here is an interface view of the API additions:
FormatConfidence
```java
// Ordered symbolic values of format-identification confidence:
// (See Automatic Format Identification section for details)
package org.dspace.content;
public enum FormatConfidence
{
    // No format was identified. The unknown format is assigned.
    UNIDENTIFIED,

    // A format was found but it was based on "circumstantial evidence", i.e.
    // external properties of the Bitstream such as its name or a MIME type
    // attached to it on ingest.
    CIRCUMSTANTIAL,

    // The data format was determined by coarse examination of the contents of
    // the Bitstream and comparison against the known characteristics of
    // generic formats such as plain text, or comma-separated-values files.
    HEURISTIC,

    // The format was identified by matching its content positively against an
    // internal signature that describes a "generic" (supertype) format or
    // family of formats.
    POSITIVE_GENERIC,

    // The format was identified by matching its content positively against an
    // internal signature that describes a specific, precise, data format.
    POSITIVE_SPECIFIC,

    // Contents of Bitstream are validated as conforming to the identified
    // format; this is the highest confidence reached by automatic identification.
    VALIDATED,

    // Format is derived from reliable technical metadata supplied at the time
    // the Bitstream was ingested; if applied it is given a priority that
    // overrides automatic identifications. Ingest-derived formats with a low
    // level of confidence should be assigned CIRCUMSTANTIAL.
    INGEST,

    // Format was identified interactively by a user, which acts as an
    // override of automatic format identification.
    MANUAL
}
```
```java
// ONLY includes new methods
class Bitstream extends DSpaceObject
{
    // Returns the "confidence" recorded when the format of this Bitstream was identified.
    public FormatConfidence getFormatConfidence();

    // Sets the value returned by getFormatConfidence()
    public void setFormatConfidence(FormatConfidence value);

    // Returns the source that identified the format of this Bitstream.
    public String getFormatSource();

    // Sets the value returned by getFormatSource()
    public void setFormatSource(String value);
}
```
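Because the FormatConfidence constants are declared in increasing order of confidence, code that arbitrates between competing identifications can use the enum's natural ordering. A minimal, self-contained sketch: the enum is re-declared locally for illustration, and shouldReplace is a hypothetical helper, not part of the proposed API:

```java
// Self-contained illustration of comparing FormatConfidence values by
// their declaration order (re-declared here; not the DSpace source).
enum FormatConfidence {
    UNIDENTIFIED, CIRCUMSTANTIAL, HEURISTIC, POSITIVE_GENERIC,
    POSITIVE_SPECIFIC, VALIDATED, INGEST, MANUAL
}

public class ConfidenceDemo {
    // Hypothetical policy: a new identification replaces the recorded one
    // only when it is strictly more confident.
    static boolean shouldReplace(FormatConfidence recorded, FormatConfidence candidate) {
        return candidate.compareTo(recorded) > 0;
    }

    public static void main(String[] args) {
        // POSITIVE_SPECIFIC outranks HEURISTIC
        System.out.println(shouldReplace(FormatConfidence.HEURISTIC,
                                         FormatConfidence.POSITIVE_SPECIFIC)); // true
        // a MANUAL override is never displaced by automatic VALIDATED
        System.out.println(shouldReplace(FormatConfidence.MANUAL,
                                         FormatConfidence.VALIDATED));         // false
    }
}
```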
BitstreamFormat
The BitstreamFormat class, which we will abbreviate as BSF, is essentially gutted and replaced with a new implementation. As described above, it now serves as a local “cache” of technical metadata that comes from external data format registries. Every BSF is bound to at least one format identifier in an external registry so its format technical metadata can be expressed in a way that is recognized outside of DSpace.
Rationalization of Usage
In the current (version 1.4.x) codebase, the BitstreamFormat object has acquired uses and meanings beyond simply describing a Bitstream’s data format – but these interfere with intended purpose in preservation activities. For example, the BSF has an internal flag which directs the UI to hide Bitstreams of that format from casual view. In an unmodified DSpace installation, the "License" BSF is the only one for which internal is true, to keep deposit license files from appearing in the Web UI. Unfortunately, this usage cripples "License" as an actual format descriptor, since it gets applied to all sorts of Bitstreams that contain licensing information no matter what their actual format. XML, plain-text, and RDF files are all tagged with the "License" BSF to make them disappear, yet it says nothing about the format of their contents.
We rationalize the function of the BSF so that it only describes the data format of a Bitstream’s contents. There are other ways to mark Bitstreams as "internal", e.g. Bundle membership, which is a better fit with the semantics of the content model anyway.
Formal Definition
This is the new formal definition of a BitstreamFormat:
- Each BSF represents a description of a single, unique data format; there is exactly one BSF for each distinct data format referenced by Bitstreams in the DSpace archive.
- A BSF is bound to one or more entries in external data format registries. The identifiers are logically all peers, although the metadata cached in the BSF is only imported (or updated) from one of them.
- All external format identifiers which describe the equivalent format must be bound to the same DSpace BSF – in other words, there should never be two BSFs describing the same conceptual format, such as "PDF Version 1.2": one BSF encompasses all synonym external identifiers.
- The BSF's function is to describe the data format of the contents of a Bitstream, and nothing more.
- Application code must not "overload" a BSF with additional implicit meanings, such as marking Bitstreams invisible in a UI or indicating a function such as the deposit license.
- One special BSF, the unknown format, represents the unknown or unidentified data format.
- Every Bitstream refers to exactly one BSF:
- If its format has not been assigned or identified, it is the unknown format.
- This allows an application to assume every Bitstream has a valid BSF with all of its attendant properties, so e.g. it can get a valid MIME type.
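The last rule can be illustrated with a tiny stand-alone sketch: an unidentified Bitstream falls back to the unknown format, which still yields a usable default MIME type (application/octet-stream, the default named later in FormatRegistryManager). The class and method here are hypothetical stand-ins, not the proposed API:

```java
// Hypothetical stand-in showing the "every Bitstream has a valid BSF"
// invariant: a null stands in for "format not yet identified", and the
// unknown format supplies the generic default MIME type.
public class UnknownFormatFallback {
    static final String DEFAULT_MIME_TYPE = "application/octet-stream";

    static String mimeTypeFor(String identifiedMimeType) {
        return identifiedMimeType != null ? identifiedMimeType : DEFAULT_MIME_TYPE;
    }

    public static void main(String[] args) {
        System.out.println(mimeTypeFor(null));              // application/octet-stream
        System.out.println(mimeTypeFor("application/pdf")); // application/pdf
    }
}
```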
In the content model, a BitstreamFormat aggregates all the format-related technical metadata for Bitstreams of its type. Not only does this save space, it lets an administrator make changes and adjustments to that metadata easily in one place.
It also holds DSpace-specific format metadata, which currently consists only of the one administrative metadata item, the "support-level", which controls the preservation support policy for all Bitstreams of that format.
This new implementation makes the BitstreamFormat a local cache for the relevant format metadata, but mainly it acts as a reference to the full technical metadata found in one or more external format registries (like the GDFR). It only caches the metadata immediately needed by DSpace, such as MIME type, name, description. This is adequate for everyday operation of the archive; DSpace never has to go to the external format registry for metadata.
This organization makes it much easier to take advantage of developments in the rapidly evolving field of data format technical metadata. Instead of trying to import all of the various schemas and data models of every external data format registry, we can just maintain a reference into the external registry.
External Format Identifiers
External data format identifiers are introduced in this architecture. They link DSpace's internal BSF objects to entries in external format registries such as the GDFR. Format identifiers consist of a namespace name, and the identifier, an arbitrary string. They possess these properties:
- Globally unique: each pair of namespace and identifier (except for the "Provisional" namespace) is unique in the world.
- Persistent: identifiers in external format registries are expected to be persistent, that is, bound forever to the same format description.
- Resolvable: there must exist some automated means to retrieve the metadata bound to an external identifier.
  - It may require special software and local data files or contacting network resources.
- Cardinality: a single external identifier may only belong to one BSF, but a BSF may be bound to multiple format identifiers.
- Required: a BSF must be bound to at least one external identifier.
External identifiers are the key to naming data formats in a way that can be recognized outside of the DSpace system; they allow the BSF to be meaningfully crosswalked to and from technical metadata schemas such as PREMIS.
The Namespace
We add a DSpace-specific namespace to the external format identifier to:
1. Mark which external registry it belongs to
2. Ensure every external identifier is unique even if two registries have identical identifiers
The namespace is a short string belonging to a controlled vocabulary (represented by a table in the DSpace configuration or the RDBMS). Each namespace describes a source of data format information.
Initial Set of Well-Known Namespaces
Here are the suggested initial namespace names for existing registries. This is not an exhaustive set; since the registry is a plugin, a new registry can always be added as a DSpace add-on.
The DSpace-Internal, DSpace, and Provisional registries are implemented in the core. We expect a PRONOM registry plugin to be available with the initial reference implementation.
<table>
<thead>
<tr>
<th>Namespace</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>DSpace-Internal</td>
<td>Contains only the Unknown identifier, a placeholder for the unknown format which represents an unidentified Bitstream format. This is the only mandatory namespace which is automatically configured.</td>
</tr>
<tr>
<td>DSpace</td>
<td>Contains most of the original generic formats defined by DSpace, for backward-compatibility and for archives which do not care about precise data format descriptions.</td>
</tr>
<tr>
<td>Provisional</td>
<td>For custom data formats local to the archive. Provisional extensions to the "DSpace" format registry are put in their own namespace so there is no chance of a conflict with formats added later to "DSpace", and also to make their status as local extensions obvious.</td>
</tr>
<tr>
<td>PRONOM</td>
<td>PUIs from PRONOM's format registry.</td>
</tr>
<tr>
<td>GDFR</td>
<td>Persistent identifiers from the Global Digital Format Registry.</td>
</tr>
<tr>
<td>LOC</td>
<td>Library of Congress Sustainability of Digital Formats project format descriptions.</td>
</tr>
</tbody>
</table>
The standard namespace values are available as public static fields on the FormatRegistryManager class. The LOC namespace is not really a registry yet but it makes sense to reserve the namespace since it is a significant source of format technical metadata.
MIME Types excluded
Note that MIME types cannot be BSF identifiers because they violate the rule that only one BSF may be bound to each identifier. MIME types are imprecise; many BSFs share the same MIME type, e.g. many XML-based formats are all tagged "text/xml".
Apple UTIs
Apple Computer has developed what is essentially an alternative to MIME types called Uniform Type Identifiers (UTIs). It is an interesting development, although not directly relevant. Although the UTI database is, in a sense, a registry of format identifiers, it is not a good candidate for use in DSpace for several reasons:
- No accompanying metadata of any sort, it is just an identifier space.
- Ad-hoc extensions by vendors, no online central registry.
- Varying levels of granularity, since it is primarily intended to match files to application programs.
Removing the "Internal" Flag
This renovation removes BitstreamFormat’s "internal" flag, which was originally intended to guide UI applications in hiding certain classifications of Bitstreams.
Was the "Internal" flag ever truly necessary? Apparently it was only ever used to make the Bitstreams containing the deposit license and Creative Commons licenses invisible in the Web UI.
Its presence is actually harmful: not only does it have nothing to do with describing the format of the data, it actually encouraged usage that obscures the data format. The one "License" BSF was applied to all Bitstreams containing an Item’s deposit license and Creative Commons licenses, no matter what their actual data formats. The Creative Commons license consists of three Bitstreams of distinct actual formats – e.g. one is RDF. It is misnamed with the "License" format so it will not be properly preserved.
In the DSpace@MIT registry, I have determined that "License" is the only BitstreamFormat for which "internal" is true, and that Bitstreams whose format is "License" appear only in Bundles named "LICENSE" and "CC_LICENSE". Therefore, we can determine the internal-ness (i.e. invisibility) of a Bitstream by its owning Bundle as accurately as by the bogus BSF. It makes no practical difference which technique is used, but the Bundle-name cue is a better fit with the current content model. It works just as well under the DSpace 2.0 content model, since bundles evolve into "manifestations".
By modifying UI code to judge "visibility" of a Bitstream by the Bundle it belongs to rather than a cryptic property of its format, we can get rid of the "internal" bit without any user-visible changes. The content model and API are improved, since license Bitstreams may now have meaningful data formats assigned so they can be preserved and disseminated correctly.
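A UI could implement the Bundle-name cue roughly as follows. BundleVisibility and its hidden-bundle set are hypothetical; the Bundle names are the ones observed in DSpace@MIT above:

```java
// Sketch of judging Bitstream "visibility" by owning Bundle name instead
// of the old "internal" flag. The class is a hypothetical illustration;
// the Bundle names come from the DSpace@MIT observation. Requires Java 9+
// for Set.of.
import java.util.Set;

public class BundleVisibility {
    private static final Set<String> HIDDEN_BUNDLES = Set.of("LICENSE", "CC_LICENSE");

    // True when a Bitstream in this Bundle should be hidden from casual view.
    public static boolean isHidden(String bundleName) {
        return HIDDEN_BUNDLES.contains(bundleName);
    }

    public static void main(String[] args) {
        System.out.println(isHidden("LICENSE"));  // true
        System.out.println(isHidden("ORIGINAL")); // false
    }
}
```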
BitstreamFormat Properties
Here is a list of all the BSF’s properties, i.e. fields of the object. Source is where the property originally comes from; Mod means whether or not it may be changed by the archive administrator.
BitstreamFormat Properties
<table>
<thead>
<tr>
<th>Property</th>
<th>Source</th>
<th>Mod</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Name</td>
<td>Registry, can override</td>
<td>Yes</td>
<td>Brief, human-readable description of this format, for listing it in menus. Used to be "short description".</td>
</tr>
<tr>
<td>Description</td>
<td>Registry, can override</td>
<td>Yes</td>
<td>Detailed human-readable explanation of the format including its unique aspects.</td>
</tr>
<tr>
<td>Identifier</td>
<td>Registry</td>
<td>No</td>
<td>List of all namespaced identifiers linking this BSF to an entry in an established data format registry. A BSF must have at least one identifier. This list is ordered; the first member names the external registry entry that was originally imported to create this BSF.</td>
</tr>
<tr>
<td>Support-level</td>
<td>User-entered</td>
<td>Yes</td>
<td>Encoding of the local DSpace archive's policy regarding preservation of Bitstreams encoded in this format. Value must be one of: 1. Unset - Policy not yet initialized; flags format entries that need attention from the DSpace administrator. 2. Unrecognized - Format cannot be identified. 3. Known - Format was identified but preservation services are not promised. 4. Supported - Bitstream will be preserved.</td>
</tr>
<tr>
<td>MIME-type</td>
<td>Registry, can override</td>
<td>Yes</td>
<td>Canonical MIME type (Internet data type) that describes this format. This is where the Content-Type header's value comes from, when delivering a Bitstream by HTTP.</td>
</tr>
<tr>
<td>Extension</td>
<td>Registry, can override</td>
<td>Yes</td>
<td>The canonical filename extension to apply to unnamed Bitstreams when delivering content over HTTP and in DIPs. (NOTE: Some format registries keep a list of filename extensions used to help identify formats, but we only need the canonical extension in the BSF model.)</td>
</tr>
<tr>
<td>LastUpdated</td>
<td>System</td>
<td>No</td>
<td>Timestamp when this BSF was imported or last updated from its home registry.</td>
</tr>
</tbody>
</table>
BitstreamFormat API
Here is the new API for BitstreamFormat.
Add the following methods:
Remove these methods:
findByShortDescription()
findByMIMEType()
findNonInternal()
isInternal()
setInternal()
getShortDescription() // renamed to getName()
setShortDescription() // renamed to setName()
getExtensions()
setExtensions()
Existing Methods that Remain:
These existing methods are retained, just mentioned here for completeness.
static BitstreamFormat create(Context context);
void delete();
static BitstreamFormat find(Context context, int id);
static BitstreamFormat findUnknown(Context context);
static BitstreamFormat[] findAll(Context context);
String getDescription();
int getID();
String getMIMEType();
int getSupportLevel();
static int getSupportLevelID(String slevel);
void setDescription(String s);
void setMIMEType(String s);
void setSupportLevel(int sl);
void update();
FormatRegistryManager
Following DSpace coding conventions, the factory and static class for a service is named with the suffix -Manager. The FormatRegistryManager class gives access to instances of FormatRegistry. Since a format identifier is directed to a FormatRegistry implementation by its namespace, the Manager also takes care of selecting the right instance for a namespaced identifier. This lets applications use namespaced identifiers without worrying about taking them apart to choose a registry instance.
Since all of the FormatRegistryManager's state is effectively managed by the Plugin Manager, it does not need any state itself and only has static methods.
API
Here is a sketch of the API:
public class FormatRegistryManager
{
// Namespace for the internal format registry - contains only "Unknown"
public static final String INTERNAL_NAMESPACE = "Internal";
// Name of the unknown format:
public static final String UNKNOWN_FORMAT_IDENTIFIER = "Unknown";
// Applications should use this as default mime-type.
public static final String DEFAULT_MIME_TYPE = "application/octet-stream";
// returns possibly-localized human-readable name of Unknown format.
public static String getUnknownFormatName(Context context);
// Returns registry plugin for external format identifier namespace
public static FormatRegistry find(String namespace);
// Returns array of all Namespace strings, even "artifacts" no longer configured.
public static String[] getAllNamespaces(Context context)
throws SQLException, AuthorizeException
// Returns array of the Namespaces of all currently configured external registries.
public static String[] getRegistryNamespaces();
// Calls appropriate registry plugin to import format bound to a namespaced identifier.
// Returns null on error.
public static BitstreamFormat importExternalFormat(Context context, String namespace, String identifier)
throws FormatRegistryException, AuthorizeException
// Calls appropriate registry plugin to update format bound to a namespaced identifier.
// When force is true, update even when external format has not been modified.
public static void updateBitstreamFormat(Context context, BitstreamFormat existing, String namespace, String identifier, boolean force)
throws FormatRegistryException, AuthorizeException
// Calls appropriate registry plugin to compare two namespaced
// identifiers (which must be in the same namespace).
public static boolean conformsTo(String nsIdent1, String nsIdent2)
throws FormatRegistryException
// Creates a namespaced identifier out of separate namespace and registry-specific identifier.
public static String makeIdentifier(String namespace, String identifier);
// Returns the namespace or identifier portion of a namespaced identifier.
public static String namespaceOf(String nsIdentifier);
public static String identifierOf(String nsIdentifier)
}
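The three identifier-helper methods at the end of the sketch can be illustrated with a minimal stand-alone version. This is an assumption-laden sketch: the `:` separator, the class name, and the null-on-missing-separator behavior are invented here, not taken from the DSpace API.

```java
// Hypothetical sketch of the namespaced-identifier helpers; the ':'
// separator is an assumed convention, not the documented DSpace one.
public class NamespacedIdentifiers {
    private static final char SEPARATOR = ':';

    // Joins a namespace and a registry-specific identifier.
    public static String makeIdentifier(String namespace, String identifier) {
        return namespace + SEPARATOR + identifier;
    }

    // Namespace portion: everything before the first separator.
    public static String namespaceOf(String nsIdentifier) {
        int sep = nsIdentifier.indexOf(SEPARATOR);
        return sep < 0 ? null : nsIdentifier.substring(0, sep);
    }

    // Identifier portion: everything after the first separator. Splitting on
    // the first occurrence lets the registry-specific part contain further
    // punctuation (e.g. PRONOM's "fmt/18").
    public static String identifierOf(String nsIdentifier) {
        int sep = nsIdentifier.indexOf(SEPARATOR);
        return sep < 0 ? nsIdentifier : nsIdentifier.substring(sep + 1);
    }
}
```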
**Format Identification is Separate**
Although some of the tools to automatically identify formats are tied to format registries, this registry interface does *not* have anything to do with format identification. The identification tools are accessed through a separate plugin interface, discussed below.
**FormatRegistry API**
Here is the API of the `FormatRegistry`. The plugin's name is also the DSpace string value representing its namespace. It is implemented as a self-named plugin, so that the instance itself knows its namespace without depending on each DSpace administrator to get it right. The namespaces must be consistent between DSpace installations so that format technical metadata (i.e. PREMIS elements in AIPs) can be meaningfully exchanged.
```java
// implementing classes should extend SelfNamedPlugin
package org.dspace.content.format;
public interface FormatRegistry {
// Typically returns 1 element, the Namespace name of the implementation's registry
String[] getPluginNames();
// Returns the DSpace namespace of this registry.
public String getNamespace();
// Returns a URL needed to configure the underlying registry service;
// this allows the registry to configure itself from the DSpace
// configuration.
public URL getContactURL();
// Returns all external identifiers known to be synonyms of the
// given one, in namespaced-identifier format. (Because one registry
// may know about synonyms in other registries.)
public String[] getSynonymIdentifiers(Context context, String identifier)
throws FormatRegistryException, AuthorizeException;
// Import a new data format - returns a BitstreamFormat. There must
// not be any existing BSF with the same namespace and identifier.
public BitstreamFormat importExternalFormat(Context context, String identifier)
throws FormatRegistryException, AuthorizeException;
// Compare existing DSpace format against registry, updating anything that's changed.
// NOTE: it does not need to check last-modified date, framework does that.
public BitstreamFormat updateBitstreamFormat(Context context, BitstreamFormat existing, String identifier)
throws FormatRegistryException, AuthorizeException;
// Return date when this entry was last changed, or null if unknown.
public Date getLastModified(String identifier)
throws FormatRegistryException;
// Predicate, true if format named by sub is a subtype or
// otherwise "conforms" to the format defined by fmt.
public boolean conformsTo(String sub, String fmt)
throws FormatRegistryException;
// Free any resources associated with this registry connection,
// since it will not be used any more.
public void shutdown();
}
```
**Registry Name**
Typically the name of the registry is bound to some well-known public constant so it can be referred to in a program without a "magic string" that is easily misspelled to disastrous effect. E.g.:
```java
FormatRegistryManager.INTERNAL_NAMESPACE ... "Internal"
DSpaceFormatRegistry.NAMESPACE ... "DSpace"
ProvisionalFormatRegistry.NAMESPACE ... "Provisional"
```
Implementors of new registries should follow this convention.
**The Import Operation**
*Importing* an entry from an external format registry creates a new local BSF. In order to create a new BSF, there must not be an existing BSF already bound to that entry's identifier (or any identifiers it lists as synonyms).
So, first check for a BSF bound to any of the namespaced identifiers attached to the external registry entry. If one is found, add the new identifier to that BSF and return it instead of creating a new one.
If no existing BSF is found, create a new one, and initialize its properties from appropriate values of registry entry’s technical metadata:
- The entry’s identifier becomes the BSF’s first external identifier.
- Any synonym identifiers in the entry are added to the BSF
- Initialize the BSF’s name, description, MIME type etc.
Binding *all* equivalent (synonym) identifiers in the remote entry ensures that a scenario like this does not create extra local BSFs:
1. Import format “PRONOM:foo” as a BSF.
2. When asked to import format “GDFR:bar”, discover that the GDFR entry lists “PRONOM:foo” as one of its synonyms.
- Creating a new entry for that would result in two entries bound to “PRONOM:foo”, which is illegal.
- Adding “GDFR:bar” as a synonym identifier to the existing BSF solves the problem.
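The import rule above can be sketched with an in-memory map standing in for the table that binds namespaced identifiers to local format records; every name in this snippet is illustrative, not the DSpace API.

```java
import java.util.*;

// Toy model of the synonym-aware import rule: a Map stands in for the
// binding of namespaced identifiers to local format records ("BSFs").
public class ImportSketch {
    // namespaced identifier -> local format id
    static Map<String, Integer> bound = new HashMap<>();
    static int nextId = 1;

    // Import the entry named "identifier", whose registry entry lists "synonyms".
    static int importExternalFormat(String identifier, List<String> synonyms) {
        List<String> all = new ArrayList<>(synonyms);
        all.add(identifier);
        // 1. If any identifier is already bound, reuse that BSF and
        //    attach the new identifiers to it instead of creating a duplicate.
        for (String id : all)
            if (bound.containsKey(id)) {
                int existing = bound.get(id);
                for (String syn : all) bound.put(syn, existing);
                return existing;
            }
        // 2. Otherwise create a new BSF and bind every identifier to it.
        int created = nextId++;
        for (String id : all) bound.put(id, created);
        return created;
    }
}
```

Running the two-step scenario above against this sketch returns the same local id for both imports, so no second BSF is ever bound to "PRONOM:foo".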
**The Update Operation**
This describes the way a single BSF is updated from its corresponding entry in an external data format registry – the choice of which BSF to update and which registry to query are separate issues covered in the section on administrative actions.
Updating the BSF is a lot like the import operation:
- For metadata items such as Name, Description, etc. replace the value in the BSF if the remote registry’s value is different.
- If the remote entry has any synonym identifiers which are not already listed, add them.
- Do not delete synonym identifiers that are *not* listed since they may have come from another registry.
**The ConformsTo Operation**
Supplied with the identifiers of two entries in the registry, this predicate function returns true if the first format *conforms to* the second. That means, any Bitstream identified as the first format would pass the tests to be identified as the second as well. For example, if the first format is a specific version of a format while the second identifier names a format family which includes it, *conformsTo* would be true.
The registry plugin should implement this operation efficiently, since it may be called many times, e.g. when choosing applications to match the format of a Bitstream.
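One way to picture the predicate: a toy `conformsTo` over a hypothetical parent-map of format families. The map contents are invented examples; a real registry plugin would answer from its own format model. Walking an in-memory parent chain also keeps the call cheap, in line with the efficiency requirement.

```java
import java.util.*;

// Illustrative conformsTo: a format conforms to itself and, transitively,
// to any ancestor "format family" in a parent map (invented data).
public class ConformsSketch {
    static Map<String, String> parent = new HashMap<>();
    static {
        parent.put("DSpace:PDF-1.2", "DSpace:PDF"); // specific version -> family
        parent.put("DSpace:PDF", "DSpace:Binary");  // family -> broader family
    }

    // True if "sub" is "fmt" itself or a descendant of it.
    static boolean conformsTo(String sub, String fmt) {
        for (String f = sub; f != null; f = parent.get(f))
            if (f.equals(fmt)) return true;
        return false;
    }
}
```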
**The DSpace-Internal "Registry"**
The *unknown* BSF is installed with the system, but for consistency, it is also derived from an entry in an “external” format registry. Since it is the only BSF which is absolutely mandatory, this registry must always be available, so it is a hard-coded registry that is always configured.
The `FormatRegistryManager` maps the namespace DSpace-Internal to a special registry object which only recognizes the “Unknown” format identifier. The first reference to that format identifier, e.g. by the method `BitstreamFormat.findUnknown()`, “imports” it to create the *unknown format* BSF.
Although it might appear that the "Unknown" format really belongs in the DSpace namespace and registry, that would force the DSpace registry to be configured all the time. Putting "Unknown" in a separate built-in registry lets the administrator remove the DSpace registry from the configuration if she wishes to.
**The Built-In Format Registries**
The initial implementation also includes built-in format registries for the DSpace and Provisional registry namespaces. Unlike the DSpace-Internal registry, they are optional. By itself, the DSpace registry reproduces the release 1.4.x behavior to offer the option of backward-compatibility. The Provisional registry offers a separate place to put formats local to the archive, safe from namespace collisions and future updates to the DSpace registry. (It is not always the recommended way to handle new formats, more on this later.)
These two registries share an implementation class since their operation is exactly the same. The plugin manager creates one instance for each registry (i.e. namespace). They use the plugin/namespace name to select a configuration file. The registry's contents come from that file, which is read on startup.
To add entries to the Provisional format registry, the DSpace administrator edits its configuration file (in a documented XML format similar to the current `bitstream-formats.xml` initialization file) and restarts any relevant DSpace processes. Since changes should be very infrequent, this should not be a burden.
**"DSpace" (Traditional) Format Registry**
The "DSpace" registry includes most of the traditional, loosely-defined, format names, like *Text*, *Adobe PDF*, *HTML*. It offers a simple solution for DSpace administrators who do not need precise and detailed format identification, nor the digital preservation tools that require it. Since it includes most of the formats from previous DSpace releases under their same names, it also gives a degree of backward-compatibility.
It is *not* necessary to include this registry in the configuration. It can be left out if, e.g., the administrator only wants to use PRONOM formats.
The contents of the "DSpace" registry are controlled by DSpace source code releases and *must* not be altered locally. See the next section to add formats to your archive.
**"Provisional" Format Registry**
The **Provisional** format registry lets a DSpace administrator add data formats which are *not* available in any other external registry to her DSpace archive. The contents of the "Provisional" registry are strictly under the control of the administrator. It starts out empty.
Using formats from the **Provisional** namespace carries some risks: the format identifiers are meaningless (and useless for preservation) outside of their own DSpace archive. Even another DSpace might not have the same **Provisional** formats configured. Of course, a Provisional format should only be added when it is not available in any shared registry, anyway.
As soon as a new format *does* become available in some external registry, you can add the new external identifier to its `BitstreamFormat`, perhaps updating the BSF's local metadata from its external registry.
Ideally, you will *only* employ Provisional formats when there will eventually be an entry in a globally-recognized registry for the format. For example, if you are adding a format to the GDFR but need to apply it to a Bitstream immediately, before the GDFR editorial process accepts it, you could create it in the **Provisional** registry to have it available immediately. Later, once the GDFR has an entry for it, add the GDFR identifier to the `BitstreamFormat` you already created. Then, DIPs of objects in that format will bear the GDFR format identifier that is recognizable to other archives, and your Bitstreams will also have linkage to any preservation metadata in the GDFR.
This registry is implemented the same way as the "DSpace" registry, reading format information at startup from an XML document that lists all the "Provisional" formats. However, its identifiers occupy a separate namespace so there is no chance of collisions with the data formats provided by the DSpace release.
**Database Schema Changes**
Here are the tables which were added or changed:
- **ExternalFormatNamespace table**, for efficient storage and comparison of External Format Identifier namespaces
```sql
CREATE TABLE ExternalFormatNamespace
(
  namespace_id INTEGER PRIMARY KEY,
  namespace    VARCHAR(128)
);
```
- **BitstreamFormat table**, a cache of references to technical metadata in external format registries
```sql
CREATE TABLE BitstreamFormat
(
  bitstream_format_id          INTEGER PRIMARY KEY,
  name                         VARCHAR(128),
  description                  TEXT,
  mimetype                     VARCHAR(128),
  support_level                INTEGER,
  canonical_extension          VARCHAR(40),
  override_name                BOOL DEFAULT FALSE,
  override_description         BOOL DEFAULT FALSE,
  override_mimetype            BOOL DEFAULT FALSE,
  override_canonical_extension BOOL DEFAULT FALSE
);
```
- **ExternalFormatIdentifier table**, map of BitstreamFormat entry to namespaced external identifier
```sql
CREATE TABLE ExternalFormatIdentifier
(
  external_format_id  INTEGER PRIMARY KEY,
  bitstream_format_id INTEGER REFERENCES BitstreamFormat(bitstream_format_id),
  namespace_id        INTEGER REFERENCES ExternalFormatNamespace(namespace_id),
  external_identifier VARCHAR(128),
  last_imported       TIMESTAMP WITH TIME ZONE
);
```
- **FormatSource table**, normalized values of the format_source column, since we can expect much repetition.
```sql
CREATE TABLE FormatSource
(
  format_source_id INTEGER PRIMARY KEY,
  format_source    VARCHAR(256) UNIQUE
);
```
- **Bitstream table**
```sql
CREATE TABLE Bitstream
(
  bitstream_id            INTEGER PRIMARY KEY,
  bitstream_format_id     INTEGER REFERENCES BitstreamFormat(bitstream_format_id),
  name                    VARCHAR(256),
  size_bytes              BIGINT,
  checksum                VARCHAR(64),
  checksum_algorithm      VARCHAR(32),
  description             TEXT,
  user_format_description TEXT,
  source                  VARCHAR(256),
  internal_id             VARCHAR(256),
  deleted                 BOOL,
  store_number            INTEGER,
  sequence_id             INTEGER,
  format_confidence       INTEGER DEFAULT 0,
  format_source_id        INTEGER REFERENCES FormatSource(format_source_id)
);
```
### Automatic Format Identification
Experience has shown that even the most knowledgeable submitters rarely understand or care about identifying the data formats of materials they upload. Also, many submissions are done in batch and non-interactive transactions where human intervention is not possible. Thus, we promote automatic format identification as the primary method of assigning formats to Bitstreams, and strive to make it accurate, reliable and efficient.
We propose a configurable and extensible framework for integrating external automatic format identification services. This is the best approach because:
- There are already several powerful, open-source format identifiers available for integration.
- Format identification services are distinct from data format registries (though some are related), so they demand a separate plugin framework.
- Since the science and practice of format identification is still evolving rapidly, DSpace must be flexible enough to take advantage of new developments.
- Many DSpace sites have unique needs dictated by the types of digital objects that get submitted, perhaps demanding completely custom format identification techniques.
### Format Identification Framework
The framework is a common API to which format identification services conform. This lets DSpace treat them as a "stack" of plugin implementations, trying each one in turn and choosing the best of all their results. The API consists of:
- **FormatHit**
A class encapsulating the information returned by a single potential format identification "hit"
- **FormatConfidence**
A set of enumerated values that quantifies the "confidence" (certainty, accuracy) of format identifications for comparison on a common basis.
- **FormatIdentifier**
Interface of the plugin class that actually identifies formats.
- **FormatIdentifierManager**
Static class to operate the plugin stack and return a format identification verdict.
### The FormatHit Object
A FormatHit is a record of the results of one format-identification match. It contains the following fields:
```java
package org.dspace.content.format;
public class FormatHit implements Comparable<FormatHit> {
// Namespace and identifier of the external format
// (which may not yet exist as a BitstreamFormat)
public String namespace = null;
public String identifier = null;
// Human-readable name of the format, if any (may be null), to aid user in selection
public String name = null;
// Measure of the "confidence" of this hit, one of the constants described below.
public FormatConfidence confidence;
// Record of the method which identified the format.
public String source;
// Any warning from the identification process (to show to user)
public String warning = null;
// Constructor
public FormatHit(String namespace, String identifier, String name, FormatConfidence confidence, String warning, String source);
// Utility to add this hit in the correct place on a results list, implementing the DEFAULT method (not the ONLY method)
public void addToResults(List<FormatHit> results);
// Convenience method to return a single string containing the namespaced format identifier:
public String getIdentifier();
// Convenience method returning resulting BSF
public BitstreamFormat getBitstreamFormat(Context context)
throws SQLException, AuthorizeException, FormatRegistryException
// for Comparable<FormatHit>
public int compareTo(FormatHit o);
// Set the relevant fields in Bitstream with contents of this hit.
public void applyToBitstream(Context c, Bitstream bs)
throws SQLException, AuthorizeException, FormatRegistryException
}
```
Each attempt at automatic identification of a Bitstream’s format returns a List of FormatHit objects representing the possible matches, sorted by the accuracy and confidence of each hit.
### Confidence of Format Identification Hits
The FormatHit includes a confidence metric, which represents the accuracy and certainty of the identification. It is an enumerated type of ordered, symbolic values implemented as a Java 5 enumeration.
The specific values are described above, under the description of the FormatConfidence object.
FormatHit includes a confidence rating so hits can be compared on the basis of confidence, and so it can be stored in the Bitstream object whose format was identified.
The confidence values have a greater range and granularity than seems possible given DSpace's simple format model; i.e. DSpace does not distinguish between "generic" and "specific" formats. However, the actual automatic format identification is done by plugin implementations, some of which are driven by external format registries. These have access to more sophisticated format models and data, including notions of format granularity, so the confidence metrics reflect that.
**FormatIdentifier Interface**
Automatic format identification is accomplished by plugins implementing the FormatIdentifier interface. Each plugin applies its own technique toward identifying the format of the Bitstream. There is no direct relationship between external data format registries and format identifying plugins: a single plugin can utilize several registries or none, and different plugins can use the same external registry.
Note that the FormatHit returned by the identification process contains an external format identifier, not a BitstreamFormat. The archive administrator is responsible for ensuring that all external format identifiers returned by automatic identification methods can be imported, i.e. that the relevant registries are configured.
Here is the Java interface:
```java
package org.dspace.content.format;
public interface FormatIdentifier
{
// identify format of "target", add hits to "results"
public List<FormatHit> identifyFormat(Context context, Bitstream target, List<FormatHit> results)
throws FormatIdentifierException, AuthorizeException;
}
```
The `identifyFormat()` method attempts to identify the data format of the given Bitstream, and delivers its results by adding a new FormatHit at the appropriate point on the results list. It returns the resulting list (possibly either modified or replaced) as its value; the caller must anticipate that it may be a different Object.
An identifier method can add results to anyplace in the result list, or use the default algorithm implemented by `FormatHit.addToResults()`. It is described in the next section, Implementing Automatic Format Identification.
The identifier method should throw FormatIdentifierException when it encounters a fatal error that prevents it from properly identifying the format of the Bitstream. Otherwise, there would be no way to tell the difference between a Bitstream that does not match any of the formats this method identifies, and a fatal error in the identifying code (e.g. configuration problem), since the results list is simply returned unmodified in both cases.
When the identifier cannot return valid results because of a temporary condition that may be cleared up later – e.g. a network resource that is temporarily unavailable – it can throw the `FormatIdentifierTemporaryException` to indicate that the results may change in the future.
A note on the object lifecycle: One instance of FormatIdentifier is created per JVM; it gets cached and reused. The `identifyFormat()` method is assumed to be thread-safe. If it is not, the implementing class should have it call an internal method which is synchronized on itself.
Typically, only the internal FormatIdentifierManager code ever calls identification methods.
**FormatIdentifierManager**
This is a static class to operate the plugin stack and return a format identification verdict. Applications use it instead of calling the FormatIdentifier plugin stack directly.
```java
package org.dspace.content.format;
public class FormatIdentifierManager
{
// identify all formats matching "target", returning raw hit list.
public static List<FormatHit> identifyAllFormats(Context context, Bitstream target)
throws AuthorizeException;
// identify format of "target", returning best hit (never null).
public static FormatHit identifyFormat(Context context, Bitstream target)
throws AuthorizeException;
// identify format of "target", AND set results in the Bitstream
public static void identifyAndSetFormat(Context context, Bitstream target)
throws SQLException, AuthorizeException, FormatRegistryException
}
```
The `identifyFormat` method always returns a hit. If the Bitstream was not successfully identified, it makes up a hit containing the unknown format.
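A simplified sketch of that fallback, with invented stand-ins for the plugin and hit types (`"Internal:Unknown"` is an assumed spelling of the internal namespaced identifier). Note how each plugin may hand back a replacement results list, as the `FormatIdentifier` contract allows.

```java
import java.util.*;

// Toy manager loop: run the plugin stack in order, then fall back to the
// "Unknown" internal format if no plugin produced a hit. Types here are
// illustrative stand-ins, not the real DSpace interfaces.
public class ManagerSketch {
    interface Identifier { List<String> identify(List<String> results); }

    static String identifyFormat(List<Identifier> stack) {
        List<String> results = new ArrayList<>();
        for (Identifier plugin : stack)
            results = plugin.identify(results); // plugin may replace the list
        // Never return "no answer": fabricate a hit for the unknown format.
        return results.isEmpty() ? "Internal:Unknown" : results.get(0);
    }
}
```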
**Controlling the Format Identification Process**
The format identification framework is based on a sequence plugin, which gives administrators complete freedom to add and rearrange identification methods.
The `FormatIdentifier.identifyFormat()` method is very powerful; it actually controls the entire process of automatic format identification, even though it is called from deep within the framework. The `FormatIdentifierManager` only calls the stack of identifier methods in order and collects the results they provide. Each DSpace administrator has complete control of the methods run and the order of their execution, and the methods determine the results. The format identification API was designed to be very flexible, and also to make it easy to implement new identification methods.
Each implementation of `FormatIdentifier.identifyFormat()` can do whatever it wants with the Bitstream and list of results it is given. It might be a "filter" method that prunes the results of any below a certain level of confidence. It could look at other results and try to refine them, or reorder them.
An archive administrator can even insert a different framework into this one by configuring just one method that calls out to the other framework, and translates its results.
**Implementing Automatic Format Identification**
Many different techniques and software products have been developed to identify the format of a data file; it is still a somewhat mysterious art. The best choice of methods and their ordering depends on your archive's users and the materials they most often submit.
Plugins with the best chance of precisely identifying a format should usually be first on the list, if they use the default result-ordering method that gives priority to the first hits among others of equal confidence.
Note that some plugins may depend on other identification methods running before they do because they refine an identification already found on the results list. Special relationships like that must be well documented so the administrator is aware of them.
Each `FormatIdentifier` plugin applies its special knowledge or resources to attempt to identify the format of the Bitstream; it is not responsible for solving the whole problem. For example, take a plugin that executes a heuristic to detect comma-separated-values files. It might collaborate with another method that detects plain-text files, so that it only applies its algorithm to refine the format identification if it sees from the results that the file is plain text.
**Random Access to Bitstream Contents**
One problem that has not yet been completely addressed by this design is that many format-identification methods require random access to the contents of a Bitstream, but the Bitstream API only offers serial access through a Java `InputStream`. Random access means reading a sequence of bytes from the Bitstream starting at any point in its extent; this is very helpful when looking for an internal signature to identify the file, since the signature may be located relative to the end of the file or at some large offset into it.
There are techniques to compensate for the lack of random access, although they sacrifice efficiency. It may also be necessary to add a method to `Bitstream` to retrieve a random-access stream when the underlying storage implementation supports it.
**Format-Hit Comparison Algorithm**
Follow these steps when comparing two format identification hits to determine which has priority. This is implemented as the method `FormatHit.compareTo()`.
1. If the Namespace of the identifier cannot be resolved (looked up), i.e. because there is no FormatRegistry configured for it, that hit loses. See `FormatRegistryManager.find()`.
2. Order hits by their FormatConfidence index. Thus, hits based on the content of the Bitstream rate more highly than ones based on external attributes like the name.
3. Between two equal hits, if one has a non-null warning it is ranked lower.
4. When a hit has a conflict (that is, there is a lower-ranked hit which disagrees with it, e.g. because the MIME type is different), it is ranked below hits of the same confidence.
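Steps 2 and 3 can be sketched as a tiny `Comparable` stand-in; steps 1 and 4 are omitted because they need registry state and MIME-type data. Confidence is reduced to a bare integer here, not the real `FormatConfidence` enumeration.

```java
// Minimal stand-in for FormatHit.compareTo(): higher confidence sorts
// first, and among equal confidence a hit carrying a warning sorts later.
public class HitSketch implements Comparable<HitSketch> {
    final int confidence;   // larger = more confident
    final String warning;   // null if none

    HitSketch(int confidence, String warning) {
        this.confidence = confidence;
        this.warning = warning;
    }

    public int compareTo(HitSketch o) {
        if (confidence != o.confidence)
            return o.confidence - confidence;   // best hit first
        if ((warning == null) != (o.warning == null))
            return warning == null ? -1 : 1;    // warned hit ranks lower
        return 0;
    }
}
```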
**Default Recommended Format Identification Algorithm**
This is the default algorithm that is implemented by `FormatIdentifier.identifyFormat()` methods that simply call the `FormatHit`'s `addToResults()` method on each hit they develop.
NOTE: It is not necessary to use this algorithm. As described above, the format identification process is completely under the control of the `identifyFormat()` method implementations.
1. Start with an empty `results` list.
2. Call the `FormatIdentifier.identifyFormat()` method of each plugin in the sequence in turn:
- Passing it the Bitstream and list of accumulated results so it can add new results.
- If it has a better-confidence match than the current head of the list, that hit becomes the new head of the list.
- Otherwise the hit gets appended to the end of the list.
3. When finished, the head of the list is the best format match.
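The placement rule in step 2 reduces to a few lines. This sketch strips hits down to bare confidence values; the real `FormatHit.addToResults()` works on full hit objects ordered by `FormatConfidence`.

```java
import java.util.*;

// Sketch of the default placement rule: a hit whose confidence beats the
// current head becomes the new head; otherwise it is appended, so earlier
// plugins keep priority among hits of equal confidence.
public class AddToResultsSketch {
    static void addToResults(List<Integer> results, int confidence) {
        if (!results.isEmpty() && confidence > results.get(0))
            results.add(0, confidence); // better than the head: new best match
        else
            results.add(confidence);    // otherwise preserve earlier ordering
    }
}
```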
To select a `BitstreamFormat` from the results, follow these steps:
1. Starting with the first result, take the first format identifier and namespace that can successfully be resolved into a `BitstreamFormat` (importing a new one if necessary).
2. If no `BitstreamFormat` is available, the result is the unknown format, and the confidence is set to UNIDENTIFIED.
This logic is encapsulated in the `FormatIdentifierManager.identifyFormat()` method.
If an application wants to generate a dialog showing all of the results of an automatic format identification (e.g. to give an interactive user the chance to second-guess the automatically-chosen format) it could call the plugins and process the results according to the algorithm above. We don't anticipate anyone wanting such a service, but if it comes up, we can always add another method to FormatIdentifierManager.
**Applying the results to a Bitstream**
The properties of a Bitstream describing its format and the confidence of its identification have analogues within the FormatHit structure. The logic to map between them is encapsulated within the FormatHit.applyToBitstream() method, so there is only one piece of code to update if either of those objects changes in the future.
**Implementing a FormatIdentifier plugin**
As mentioned before, each FormatIdentifier implementation only has to do part of the job, so it can be very narrowly focused. It can also look at the results of previous methods to decide if it has anything to add to the overall solution. For example, a method that heuristically identifies text-based formats would only proceed if it saw that a previous method had identified the data as generic plain text.
Each method may add several `FormatHit`s to the result list, or none at all.
As an example of the power and flexibility of this approach, imagine a site that accepts many different kinds of XML formats, and needs precise identification of a few of them (e.g. METS documents of various profiles).
- A signature-based format identifier notices that the file starts with "<?xml" and adds a hit for the generic format "XML", and MIME type text/xml.
- The text heuristic methods don't bother running because there is already a higher-confidence positive identification (of the generic XML format).
- A table-driven specific XML identifier method notices the hit for the generic XML format, and parses enough of the file to match one of the XPath specifications in its configuration. This identifies the file as an IMS Content Package manifest, MIT OCW version 1 profile.
- If the XML parse had failed, the plugin could add a warning to the generic "XML" hit since it was obviously not well-formed XML.
- Results include the "OCW-IMSCP" format first, "XML" next, and perhaps other generic hits after it.
**Handling Conflicts**
A conflict arises when the automatic identification process returns hits for incompatible formats. This is commonly caused by contradictory clues in a Bitstream, for example, a filename extension that indicates a different format than the one matched by internal signatures. Consider a Bitstream containing a well-formed XML document; the "internal signature" method correctly identifies it as XML. However, its name ends with ".txt", which is only listed as an external signature for other kinds of formats.
We may wish to record a warning (e.g. in the server log) when the results include such a conflict. There is no place to record it in the data model, but since warnings are mainly of interest to an archive administrator tuning identification or diagnosing problems, the log should be good enough.
**User Interface Elements Related to Data Formats**
As further detailed in the use-case document, here is the minimum set of UI functions that the data format infrastructure has to provide:
1. Displaying a human-readable description of the format to the end-user.
2. Choosing from among all available data formats:
- Just the formats in the BitstreamFormat table.
- All formats available in selected external registries.
3. Selecting a BitstreamFormat from among the results of an automatic format identification, with the option of choosing freely as in Case #2.
4. Administrative interface to add and update BitstreamFormat objects.
5. Administrative interface to modify the chosen format of a Bitstream (existing UI can be used with minimal changes).
**Display**
The UI needs to display the data format of a Bitstream to the user in a meaningful way. Historically, this has been accomplished with the name (formerly "short description") property of the BSF, which is a short human-readable label such as "Adobe PDF 1.2". In some contexts it may be helpful to also cite the confidence property of the Bitstream to indicate how the format was discovered so the user can tell how much to trust it.
There are also some contexts - e.g. when offering a selection among Bitstreams to download - when it is helpful to include the MIME type of each Bitstream as well. Although MIME types are not meaningful to all end-users, they do tell the more technically sophisticated ones what the browser (or other HTTP client) is likely to do with a Bitstream.
To summarize:
- The BSF's name property is the primary user-visible identity of a Bitstream's format.
- The BSF's MIME type is sometimes helpful, some users find it more familiar than arbitrary format names.
- In some contexts, also indicate the confidence of format identification.
- Perhaps make the BSF's description available e.g. through a mouseover.
**Choosing A Format**
First, what range of formats do you want to be able to choose from?
1. All BitstreamFormat names? This typically includes only formats of objects that have already been imported into the archive.
2. All of the formats in a given external registry, or set of them? This is likely to be more complete, but brings with it the problem of handling a very large list, perhaps thousands of choices.
In the first case, the problem is easily solved, but the result is not very useful in most applications; if you are choosing a new format for a Bitstream, why limit the choice to formats of Bitstreams already in the archive? It is mostly useful when the purpose is to choose among BitstreamFormats, e.g. picking one to edit.
The second case is more helpful when actually selecting a format for a real Bitstream, but brings with it other problems - e.g. how to obtain and navigate formats from an external registry:
**Choosing Formats from an External Registry**
For good reasons, we propose to sidestep the whole problem of interfacing DSpace internals to an external registry to choose a format, and leave it to be settled between the DSpace user interface and the external registry. Here is why:
Current data format registries are large and growing – PRONOM has over 400 entries already, the TrID proprietary registry has over 2,600 entries, and the GDFR stands to acquire thousands when it becomes operational. They each have distinctive taxonomy and relationship metadata to help navigate the format space; for example, GDFR is developing a faceted classification system. Although GDFR may emerge as the standard, there is currently no effective standard for data format metadata.
Also, there are potentially several interchangeable DSpace UIs, each with differing capabilities and styles.
Given the complexity of implementing solutions for the cross product of format registries and DSpace UIs, we think it is more productive to let each UI negotiate with the format registry of its choice to produce a navigable display of formats. For example, the UI can transfer control to a popup or dialog encapsulating the registry’s UI or a registry-specific extension. All it has to do is return a format identifier that can be resolved or imported to a BitstreamFormat.
The alternative is to force all registries into a common model, which would probably deprive them of the metadata most helpful to generating a good navigation interface. Each registry has unique features in its data model to facilitate browsing.
**Choosing / Confirming Automatic Identification**
Since automatic format identification is powerful and fairly reliable, it makes sense to use it to assist users in identifying the format of their submissions, at least by narrowing down the choices. The automatic process yields a list of hits, which should be presented differently from a list of available formats:
- Indicate default choice of format and its MIME type.
- Ordering is significant, hits closer to the head of the list take precedence.
- The confidence metric on each hit is highly significant in helping the user evaluate them, so include it prominently and in a way that makes its values easy for naive users to understand.
- Offer an “escape route” to choose a format from all available formats. This becomes the default if automatic format identification failed.
**Editing BitstreamFormat Table**
See the next section for details; this is classed as an administrative operation.
**Administrative Operations Relating to BitstreamFormats**
The DSpace administrator manages data formats with these operations:
**Initial Setup**
**Configuration**
The external registries are chosen by adding their names to a plugin interface as shown here. Note that the plugin name, which is also the namespace it covers, gets supplied by the plugin itself through the getPluginNames() method. The order is not significant. This configuration example includes registries implementing the PRONOM, DSpace, and Provisional namespaces (guessing from the classnames).
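The referenced configuration is not reproduced in this export. Since the plugin supplies its own names through getPluginNames(), it would presumably be declared as a self-named plugin, along these lines (the property-key style follows the sequence example; the class names are guesses, not actual DSpace classes):

```plaintext
plugin.selfnamed.org.dspace.content.format.FormatRegistry = \
    org.dspace.content.format.PRONOMFormatRegistry, \
    org.dspace.content.format.DSpaceFormatRegistry, \
    org.dspace.content.format.ProvisionalFormatRegistry
```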
Format identifiers are configured in a sequence plugin, as in this example:
```plaintext
plugin.sequence.org.dspace.content.format.FormatIdentifier = \
org.dspace.content.format.DROIDIdentifier, \
org.dspace.content.format.UNIXFileIdentifier, \
org.dspace.content.format.DSpaceFormatRegistry
```
**Conversion from Previous DSpace Version**
To convert a DSpace 1.5 archive to the renovated BitstreamFormat code, see BitstreamFormat Conversion Instructions for instructions on running the conversion script. It will:
- Alter tables for the new data model.
- Check for alterations to the standard released BitstreamFormat registry and preserve them in the Provisional registry.
- Convert all referenced BitstreamFormats to the new model.
- Optionally use automatic format identification to set any formats that could not be directly converted.
The conversion process alters the archive as little as possible. If the backward-compatible DSpace namespace is configured, existing formats are simply mapped to that registry. Otherwise, every Bitstream has to be automatically re-identified.
**Reports**
The following reports are helpful to administrators to check and validate format-related configuration options, and to plan preservation activities. They are generated by command-line administrative applications.
**Format Identification Testing: Success Rate**
This report is intended to let administrators see the effect of changes to the format-identification configuration by testing how existing Bitstreams would now be identified. Given a group of DSpace objects to work against, this test runs the automatic format identification against each member Bitstream but does not change anything.
It reports the number of cases where the re-identification reaches a different result than the existing identification, and optionally shows each one in a detailed report. The report includes:
- Total number of Bitstreams processed
- Count of results with same format but a different confidence metric (“Unknown” failing again counts here).
- Count of results with a different format.
- Count of failures to make any identification.
- Optionally, details about each Bitstream with a different format result:
- Bitstream identification
- Previous format and confidence
- Newly identified format and confidence
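The tallies above amount to a single comparison pass over old and new identifications; a minimal sketch, where the function name and the (format, confidence) pair shape are ours, not the DSpace code:

```python
def summarize(results):
    """Tally re-identification outcomes as in the report above.

    `results` is a list of (old, new) pairs, each a (format, confidence)
    tuple, or None when identification failed -- an assumed shape, not
    the actual DSpace API.
    """
    counts = {"total": 0, "same_format_diff_conf": 0,
              "diff_format": 0, "failed": 0}
    for old, new in results:
        counts["total"] += 1
        if new is None:
            counts["failed"] += 1          # no identification at all
        elif old is None or new[0] != old[0]:
            counts["diff_format"] += 1     # different format result
        elif new[1] != old[1]:
            counts["same_format_diff_conf"] += 1
    return counts
```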
**Histogram of Formats In Use**
Operating over a range of selected Bitstreams, this report shows the number of Bitstreams identified as each format. This lets the administrator see what formats are in use, and the relative proportion of each one, for the selected Bitstreams. It can be helpful when planning for preservation, since it immediately shows the number of Bitstreams in problematic formats.
The report includes, for each BitstreamFormat in use,
- BSF's name
- External identifier(s)
- Count of Bitstreams referring to it.
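The histogram itself is just a count of Bitstreams per format; a one-function sketch (the dict field name is an assumption):

```python
from collections import Counter

def format_histogram(bitstreams):
    """Count Bitstreams per format name. `bitstreams` is a list of
    dicts with a 'format' key -- an assumed shape for illustration."""
    return Counter(b["format"] for b in bitstreams)
```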
**Edit BitstreamFormat Metadata**
Some administrators will undoubtedly have a need to make local customizations to the descriptive and technical metadata for data formats. These attributes of a BitstreamFormat may all be customized by overriding the values imported from the remote registry – and the overrides persist even when the BSF is updated from its external registry.
- Name
- Description
- MIME Type
- Canonical File Extension
- Add or Delete External Format Identifiers
- Change the primary External Format Identifier
**Update BSFs From Remote Data Format Registries**
Update the local copies of technical metadata from the originals in remote format registries. Options are:
- Update a single BSF
- Update all of the BSFs whose metadata comes from a particular registry (i.e. Namespace).
- Update all BSFs.
A timestamp of last update is maintained for all BSFs. When performing a group update, use the timestamp farthest in the past as the limit when searching for changed formats in the remote registry. After a group update, set the time of last update of all relevant BSFs to the time of this operation.
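The timestamp rule can be sketched as follows; the dict shape and function name are ours, not part of the DSpace data model:

```python
from datetime import datetime, timezone

def group_update(bsfs, now=None):
    """Apply the group-update timestamp rule described above.

    `bsfs` is a list of dicts with a 'last_updated' datetime (an
    assumed shape). Returns the timestamp to use as the search limit
    (the one farthest in the past) and stamps every BSF with the time
    of this operation.
    """
    now = now or datetime.now(timezone.utc)
    limit = min(b["last_updated"] for b in bsfs)  # farthest in the past
    for b in bsfs:
        b["last_updated"] = now                   # time of this operation
    return limit
```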
**Edit Bitstream Technical Metadata**
Here are the cases where a Bitstream's format technical metadata must be modified:
**Retry Format Identification**
Rerun the automatic format identification, perhaps after configuration changes or improvements to a remote registry. This is the same operation as the automatic format identification performed on ingest.
It can be done either interactively, for a single Bitstream, or in batch for a set of selected Bitstreams. The interactive UI should offer a choice of viewing and accepting the new identification choice.
**Override Format Choice**
Manually (interactively) force the choice of a new data format, chosen from either:
- The existing set of BitstreamFormat entries.
- An explicitly specified namespace and identifier referencing an external format, which is imported if necessary.
- This may be a simple text-entry box since it doesn't have to be user-friendly.
**Maintain "DSpace" and "Provisional" Format Registries**
The DSpace registry is updated by modifying or replacing its XML configuration file. The new contents will be loaded automatically when DSpace is next started.
Similarly, the Provisional registry is maintained by editing its XML configuration file. Its contents depend entirely on the local administrator, however.
**Remove an External Format Registry**
Before removing an external format registry from the configuration, all references to it must be removed from BSFs. There is a utility administrative tool to manage this automatically, with proper checks, i.e. so it does not delete the last (primary) external format identifier from a BSF which is in use.
In a typical scenario, the archive administrator decides to get rid of one of the external format registry plugins, e.g. the "DSpace" registry. By removing it from all of the BSF entries first, she ensures there will not be any reference failures after it is removed from the plugin configuration – although theoretically, the only normal DSpace operation that would fail is an update from the remote format registry.
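The check the utility has to enforce can be sketched like this; the data shapes are an assumption for illustration only:

```python
def can_remove_registry(bsf, registry):
    """Return True if references to `registry` may be stripped from
    this BSF: never remove the last remaining external identifier of
    a BSF that is still in use. The dict shape is hypothetical, not
    the actual DSpace model.
    """
    remaining = [i for i in bsf["identifiers"]
                 if i["namespace"] != registry]
    if bsf["in_use"] and not remaining:
        return False  # would delete the last (primary) identifier
    return True
```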
**Omissions and Future Work**
**Your Comments**
Please use the Discussion tab for your comments on and reactions to this proposal, since comments mingled with this much text would be too hard to find.
**Preservation Applications**
Although adequate support for data format representation is a necessary foundation for preservation activities, this work does not include any actual preservation functions. These are all subjects for other projects:
- Detecting and notifying administrators of obsolete formats in the archive.
- Use Policy Engine to manage reaction to obsolescence.
- Format migration and normalization (migration on ingest).
- Control normalization requirements within Communities and Collections with Policy Engine.
- Data format validation (some of which is already implemented in pending JHOVE integration work).
**Container Formats**
This proposal does not include any explicit support for features of container formats, that is, data formats such as "tar" and "Zip" which serve primarily as "wrappers" for other data objects. Containers typically implement some or all of the following functions:
- Bundle other files together into a unit.
- Apply data compression to the contents.
- Apply encryption and/or digital signature to the contents.
- Encode the contents as alphanumeric text (e.g. "base64").
- Add metadata to content file(s).
Reasons not to consider explicit support for containers at this time:
1. Container formats are supported as opaque formats without any special properties. They are identified and disseminated with technical metadata. This should be adequate for formats to be preserved as single Bitstreams, e.g. JAR.
2. In some contexts, e.g. package-based ingestion, containers are unwrapped upon ingestion so the container itself is not relevant as a preservation issue.
- The Packager mechanism can be extended with new plugins to ingest and disseminate additional types of packages, so the DSpace can just store the contents and metadata.
3. Putting a container into a digital archive is antithetical to the principles of preservation: it increases the difficulty and risk in recovering the original data by adding another layer of format to interpret.
4. There is currently no compelling use case for generalized container support in the DSpace content model.
**Documentation of Data Formats**
Attaching thorough documentation about the interpretation of a data format (i.e. standards documents) is an important preservation tool. We do not need to make any provisions for this within DSpace if we trust external data format registries to maintain it as format technical metadata.
In particular, the GDFR architecture allows for a locally-administered GDFR node, with full local copies of all data, to be integrated into the DSpace software. It includes provisions for documentation on interpreting each format. We believe we can rely on the GDFR to solve this problem when it is fully developed.
**Other Documentation**
- A presentation on the work described in this wiki at Open Repositories 2008, Southampton, UK - [Slides](http://mit.edu/sands/www/bfr/Sands%20Fish%20-%20Bitstream%20Renovation.ppt)
- Paper presented at OR08: BitstreamFormat Renovation: DSpace Gets Real Technical Metadata
Mogwaï: a Framework to Handle Complex Queries on Large Models
Gwendal Daniel, Gerson Sunyé, Jordi Cabot
To cite this version:
Gwendal Daniel, Gerson Sunyé, Jordi Cabot. Mogwaï: a Framework to Handle Complex Queries on Large Models. RCIS 2016 - 10th International Conference on Research Challenges in Information Science, Jun 2016, Grenoble, France. Research Challenges in Information Science (RCIS), 2016 IEEE Tenth International Conference on, pp.1-12, <10.1109/RCIS.2016.7549343>. <hal-01344019>
HAL Id: hal-01344019
https://hal.archives-ouvertes.fr/hal-01344019
Submitted on 11 Jul 2016
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Mogwaï: a Framework to Handle Complex Queries on Large Models
Gwendal Daniel
AtlanMod Team
Inria, Mines Nantes & Lina
Gwendal.Daniel@inria.fr
Gerson Sunyé
AtlanMod Team
Inria, Mines Nantes & Lina
Gerson.Sunye@inria.fr
Jordi Cabot
ICREA
UOC
Jordi.Cabot@icrea.cat
Abstract—While Model Driven Engineering is gaining more industrial interest, scalability issues when managing large models have become a major problem in current modeling frameworks. Scalable model persistence has been achieved by using NoSQL backends for model storage, but existing modeling framework APIs have not evolved accordingly, limiting NoSQL query performance benefits. In this paper we present the Mogwaï, a scalable and efficient model query framework based on a direct translation of OCL queries to Gremlin, a query language supported by several NoSQL databases. Generated Gremlin expressions are computed inside the database itself, bypassing limitations of existing framework APIs and improving overall performance, as confirmed by our experimental results showing an improvement of execution time up to a factor of 20 and a reduction of the memory overhead up to a factor of 75 for large models.
Index Terms—Model Query, OCL, Gremlin, Scalability, NoSQL.
I. INTRODUCTION
Model queries are a key concept in Model Driven Engineering (MDE). They constitute the basis for several modeling activities, such as model validation [1], derived features computation [2], constraint specification [3], or model transformation [4].
With the progressive adoption of MDE techniques in the industry [5], [6], existing tools have to increasingly deal with large models, and the scalability of existing technical solutions to store, edit collaboratively, transform, and query models has become a major issue [7], [8]. Large models typically appear in various engineering fields, such as civil engineering [9], automotive industry [10], product lines [11], and can be generated in model-driven reverse engineering processes [12], such as software modernization.
In the last decade, the Eclipse Modeling Framework (EMF) [13] has become the de-facto standard framework for building modeling tools, offering a strong foundation to implement model storage, querying, and persisting functionalities. The popularity of EMF is attested by the large number of available EMF-based tools on the Eclipse marketplace [14], coming from both industry and academia. Therefore, most of the research works aimed at improving modeling scalability target this framework. Nevertheless, EMF was first designed to handle simple modeling activities, and its default serialization mechanism – XMI [15] – has shown clear limitations to handle very large models [16], [17]. Furthermore, XML-based serialization has two important drawbacks: (i) it favors readability instead of compactness and (ii) XMI files have to be entirely parsed to obtain a navigational model of their contents. The first reduces performance of I/O access operations, while the second increases the memory consumption to load and query a model, and limits the use of proxies and partial loading to inter-document relationships. In addition, XMI implementations do not provide advanced features such as transactions or collaborative edition, and large monolithic model files are challenging to integrate in existing versioning systems [18].
CDO [19] was designed to address those issues by providing a client-server repository structure to handle large model in a collaborative environment. CDO supports transactions and provides a lazy-loading mechanism, which allows the manipulation of large models in a reduced amount of memory by loading only accessed objects. Recently, the increasing popularity of NoSQL databases has led to a new generation of persistence frameworks that store models in scalable and schema-less databases. Morsa [16], [20] is one of the first approaches that uses NoSQL databases to handle very large models. It relies on a client-server architecture based on MongoDB and aims to manage scalability issues using document-oriented database facilities. NeoEMF [21] is another persistence framework initially designed to take advantage of graph databases to represent models [17], [22]. It has been extended to a multi-backend solution supporting graph and key-value stores and can be configured with application-level caches to limit database accesses.
While this evolution of model persistence backends has improved the support for managing large models, it is only a partial solution to the scalability problem in current modeling frameworks. At their core, all frameworks are based on the use of low-level model handling APIs. These APIs are then used by most other MDE tools in the framework ecosystem to query and update models. Since these APIs are focused on manipulating individual model elements and do not offer support for generic queries, all kinds of queries required by model-based tools must be translated into a sequence of API calls for individual accesses. This is clearly inefficient when combined with persistence frameworks because (i) the API granularity is too fine to benefit from the advanced...
query capabilities of the backend and (ii) an important time and memory overhead is necessary to construct navigable intermediate objects needed to interact with the API (e.g. to chain the sequence of fine-grained API calls required to obtain the final result).
To overcome this situation, we propose the Mogwaï, an efficient and scalable query framework for large models. The Mogwaï framework translates model queries written in OCL into expressions of a graph traversal language, Gremlin, which are directly used to query models stored in a NoSQL backend. We argue that this approach is more efficient and scalable than existing solutions relying on low-level APIs. To evaluate our solution, we perform a set of queries extracted from MoDisco [12] software modernization use-cases and compare the results against existing frameworks based on the EMF API.
The paper is organized as follows: Section II introduces Gremlin, a language to query multiple NoSQL databases. Section III presents the architecture of our tool. Section IV and V introduces the transformation process from OCL expressions to Gremlin and the prototype we have developed. Section VI describes the benchmarks used to evaluate our solution and the results. Finally, Section VII presents related works and Section VIII summarizes the key points of the paper, draws conclusions, and presents our future work.
II. THE GREMLIN QUERY LANGUAGE
A. Motivation
NoSQL databases are an efficient option to store large models [17], [20]. Nevertheless, their diversity in terms of structure and supported features makes them hard to unify under a standard query language that could serve as a generic solution for our approach.
Blueprints [23] is an interface designed to unify NoSQL database access under a common API. Initially developed for graph stores, it has been implemented by a large number of databases such as Neo4j, OrientDB, and MongoDB. Blueprints is, to our knowledge, the only interface unifying several NoSQL databases¹.
Blueprints is the base of the Tinkerpop stack: a set of tools to store, serialize, manipulate, and query graph databases. Gremlin [24] is the query language designed to query Blueprints databases. It relies on a lazy data-flow framework and is able to navigate, transform, or filter a graph. It can express graph traversals concisely and shows positive performance results when compared to Cypher, the pattern-matching language used to query the Neo4j graph database [25].
Therefore, we choose Gremlin as our target language as it is the most mature and generic solution nowadays to query a wider variety of NoSQL databases.
B. Language description
Gremlin is a Groovy domain-specific language built on top of Pipes, a data-flow framework based on process graphs. A process graph is composed of vertices representing computational units and communication edges, which can be combined to create a complex computation. In the Gremlin terminology, these complex computations are called traversals, and are composed of a chain of simple computational units named steps. Gremlin defines four types of steps:
- **Transform steps**: functions mapping inputs of a given type to outputs of another type. They constitute the core of Gremlin: they provide access to adjacent vertices, incoming and outgoing edges, and properties. In addition to built-in navigation steps, Gremlin defines a generic transformation step that applies a function to its input and returns the computed results.
- **Filter steps**: functions to select or reject input elements w.r.t. a given condition. They are used to check property existence, compare values, remove duplicated results, or retain particular objects in a traversal.
- **Branch steps**: functions to split the computation into several parallelized sub-traversals and merge their results.
- **Side-effect steps**: functions returning their input values and applying side-effect operations (edge or vertex creation, property update, variable definition or assignation).
In addition, the step interface provides a set of built-in methods to access meta information: number of objects in a step, output existence, or first element in a step. These methods can be called inside a traversal to control its execution or check conditions on particular elements in a step.
Gremlin allows the definition of custom steps, functions, and variables to handle query results. For example, it is possible to assign the result of a traversal to a variable and use it in another traversal, or define a custom step to handle a particular processing.
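As an illustration of this extensibility, a user-defined step could wrap the ownedElements navigation used throughout this section. This is our own sketch using Gremlin 2.x's defineStep facility, not an example taken from the paper:

```plaintext
// Define a reusable step (the step name is ours)
Gremlin.defineStep("ownedElements", [Vertex, Pipe],
    { _().outE("ownedElements").inV })

g.v(3).ownedElements()  // equivalent to g.v(3).outE("ownedElements").inV
```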
¹ The implementation list is available at https://github.com/tinkerpop/blueprints
Fig. 1. Sample Metamodel and Model: (a) Metamodel, (b) Instance Model, (c) Persisted Model.
As an example, Figure 1(a) shows a simple metamodel representing Packages and Classes. Packages are named containers owning Classes through their ownedElements reference. An instance of this metamodel is shown in Figure 1(b) and its graph database representation is shown in Figure 1(c). Grey vertices represent Package and Class metaclasses and are linked to their instances through instanceof edges. The package p1 is linked to classes c1 and c2 using ownedElements edges.
In what follows, we describe some simple Gremlin examples based on this model. A Gremlin traversal begins with a Start step, which gives access to graph-level information such as indexes, vertex and edge lookups, and property-based queries. For example, the traversal below queries the classes index and returns the vertices indexed with the name Package, representing the Package class in Figure 1(a). In our example, this class matches vertex 1.
```plaintext
g.idx("classes")[[name:"Package"]]  // -> v(1)
```
The most common steps are transform steps, which allow navigation in a graph. The steps outE(rel) and inE(rel) navigate from input vertices to their outgoing and incoming edges, respectively, using the relationship rel as a filter. inV and outV are their opposites: they compute the head and tail vertices of an edge. For example, the following traversal returns all the vertices that are related to vertex 3 by the relationship ownedElements. The Start step g.v(3) is a vertex lookup that returns the vertex with the id 3.
```plaintext
g.v(3).outE("ownedElements").inV  // -> [v(4), v(5)]
```
Filter steps are used to select or reject a subset of input elements given a condition. They are used to filter vertices given a property value, remove duplicate elements in the traversal, or get the elements of a previous step. For example, the following traversal collects all the vertices related to vertex 3 by the relationship ownedElements that have a property name with a size longer than 1 character.
```plaintext
g.v(3).outE("ownedElements").inV.has("name")
 .filter{it.name.length() > 1}  // -> [v(4), v(5)]
```
Branch steps are particular steps used to split a traversal into sub queries, and merge their results. As an example, the following traversal collects all the id and name properties for the vertices related to vertex 3 by the relationship ownedElements. The computation is split using the copySplit step and merged in the parent traversal using exhaustMerge.
```plaintext
g.v(3).outE("ownedElements").inV
 .copySplit(_().name, _().id).exhaustMerge()
```
Finally, side-effect steps modify a graph, compute a value, or assign variables in a traversal. They are used to fill collections with step results, update properties, or create elements. For example, it is possible to store the result of the previous traversal in a table using the Fill step.
```plaintext
def table = []
g.v(3).outE("ownedElements").inV.has("name")
 .filter{it.name.length() > 10}.fill(table)  // -> [v(4), v(5)]
```
III. THE MOGWAÏ FRAMEWORK
The Mogwaï framework is our proposal for handling complex queries on large models. As discussed above, we will assume that those large models are stored in a NoSQL backend with Gremlin support. On the modeling side we will also assume that queries are expressed in OCL (Object Constraint Language), the OMG standard for complementing graphical languages with textual descriptions of invariants, operation contracts, derivation rules, and query expressions.
More precisely, the Mogwaï approach relies on a model-to-model transformation that generates Gremlin traversals from OCL queries, which are then directly computed by any Blueprints database. The results of the query are then translated back to the modeling framework, resulting in the set of modeling objects that satisfies the query expression.
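As a rough illustration of the kind of mapping this transformation performs (our own sketch over the sample model of Figure 1, not a rule taken from the implementation):

```plaintext
-- OCL: instances of Package named "p1"
Package.allInstances()->select(p | p.name = 'p1')

// A plausible Gremlin translation over the persisted graph
g.idx("classes")[[name:"Package"]].inE("instanceof").outV
 .filter{it.name == "p1"}
```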
Figure 2 shows the overall query process of (a) the Mogwaï framework and compares it with (b) standard EMF\textsuperscript{2} API based approaches.
An initial textual OCL expression is parsed to transform it into an OCL model conforming to the OCL metamodel. This model constitutes the input of a model-to-model transformation that generates the corresponding Gremlin model. The Gremlin model is then expressed as a text string conforming to the Gremlin grammar and sent to the Blueprints database for execution.
The main difference with existing query frameworks is that the Mogwaï framework does not rely on the EMF API to perform a query. In general, API based query frameworks translate OCL queries into a sequence of low-level API calls, which are then performed one after the other on the database. While this approach has the benefit of being compatible with every EMF-based application, it does not take full advantage of the database structure and query optimizations. Furthermore, each object fetched from the database has to be reified to be navigable, even if it is not going to be part of the end result. Therefore, the execution time of EMF-based solutions strongly depends on the number of intermediate objects reified from the database, which itself depends on the complexity of the query but also on the size of the model: bigger models need a larger number of reified objects to represent the intermediate steps. For the Mogwaï framework, in contrast, execution time does not depend on the number of intermediate objects, making it more scalable over large models.
Once the Gremlin traversal has been executed on the database side, the results are returned to the framework that reifies those results into the corresponding model elements. With this architecture, it is possible to plug our solution on top of various persistence frameworks and use it in multiple contexts.
To sum up, the transformation process generates a single Gremlin traversal from an OCL query and runs it over the database. This solution provides two benefits: (i) delegation of the query computation to the database, taking full advantage of the built-in caches, indexes, and query optimizers; and (ii) a single execution, instead of the fragmented queries of EMF API based approaches, removing intermediate object reification.
\textsuperscript{2}We focus the explanation on the EMF framework but the results are generalizable to all other modeling frameworks we are familiar with.
IV. OCL TO GREMLIN TRANSFORMATION
A. Mapping of OCL expressions
To illustrate the different phases of the transformation, we introduce a running example: Listing 1 shows a simple query (on a model conforming to Figure 1(a)) that selects the Package instances that contain at least one element through the ownedElements reference. The transformation process that generates the Gremlin traversal in Listing 2 relies on the mappings shown in Table I to translate individual OCL expressions into Gremlin steps. In this Section we detail how the different steps of the traversal are generated using this mapping. In the next Section we present how the input OCL syntax tree is processed and how the generated steps are linked together to produce the complete Gremlin query.
Listing 1. Sample OCL Query
```
def sampleSelect : res : Set(Package) =
	Package.allInstances()->select(e | e.ownedElements->isEmpty())
```
Listing 2. Generated Gremlin Textual Traversal
We have divided the supported OCL expressions into four categories based on Gremlin step types: transformations, collection operations, iterators, and general expressions. Note that other types of OCL expressions that are not explicitly listed in the table can be first expressed in terms of those that appear there [26] and therefore be also covered by our mapping.
The first group, transformation expressions, returns a computed value from their input. Expressions that navigate the elements in the model are mapped to navigation steps: a Type access is translated into an index query returning the vertex representing the type, assuming the type exists. In the example, the Package type is mapped to the index call g.idx("classes")[[name:"Package"]], and the resulting vertex is stored in a dedicated variable to reduce database accesses. The allInstances operation is mapped to a traversal returning the adjacent vertices that have an instanceof outgoing edge to the Type vertex (inE('instanceof').outV). Reference and attribute collect operations are respectively mapped to an adjacent vertex step on the reference name and a property step accessing the attribute. Type conformance is checked by comparing the adjacent instanceof vertex with the type vertex using a generic transform step. Finally, attribute and reference collects after a type cast are mapped as regular collect operations, because each vertex in the database contains its inherited attributes and edges.
The second group, operations on collections, needs a particular mapping because Gremlin step content is unmodifiable and cannot support collection modifications natively. Union, intersection, and set subtraction expressions are mapped to the fill step, which puts the result of a traversal into a variable. We have extended Gremlin by adding union, intersection, and subtract methods that compute the result of those operations from the variables storing the traversed elements. The including operation is translated to a gather step that collects all the objects and processes the gathered list with a closure adding the element to include. The list is then transformed back into a step input using the scatter step. Excluding operations can be achieved with the except step, which removes from the traversal all the elements in its argument. The includes and excludes operations are handled by transforming the step content into a Groovy collection and mapping them to a containment check. Finally, the functions returning the size and the first element of a collection are mapped to the count() and first() step methods. Note that there is no specific method to check whether a collection is empty in Gremlin, but this can be achieved through a Groovy collection transformation.
Iterator expressions are OCL operations that check a condition over a collection and return either the filtered collection or a boolean value. Select is mapped to a filter step with the translation of the condition as its body. In the example, the body of the select operation contains an implicit collect on the reference ownedElements and a collection operation isEmpty(), which are respectively mapped to outE('ownedElements').inV and toList().isEmpty(). Reject is mapped the same way with a negation of its condition. The exists and forAll mappings follow the same schema: a filter step with the condition or its negation is generated and the number of results is analyzed. Finally, general operations (comparisons, boolean operations, variable declarations, and literals) are simply mapped to their Groovy equivalents.
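The essence of this mapping can be pictured as a lookup from OCL operation names to Gremlin step templates that are then chained with dots. The following Python fragment is a simplified illustration of that idea, not the actual ATL implementation; the dictionary keys and function name are ours.

```python
# Illustrative sketch of the OCL-to-Gremlin mapping of Table I:
# each OCL operation name maps to a template for the generated Gremlin step.
OCL_TO_GREMLIN = {
    "allInstances": "inE('instanceof').outV",
    "collect_ref":  "outE('{ref}').inV",      # reference navigation
    "size":         "count()",
    "first":        "first()",
    "isEmpty":      "toList().isEmpty()",
    "select":       "filter{{{cond}}}",
    "reject":       "filter{{!({cond})}}",
    "exists":       "filter{{{cond}}}.hasNext()",
}

def map_step(op, **params):
    """Instantiate the Gremlin step template for one OCL operation."""
    return OCL_TO_GREMLIN[op].format(**params)

# Chaining the mapped steps reproduces the body of the select in Listing 2:
body = ".".join([
    map_step("collect_ref", ref="ownedElements"),
    map_step("isEmpty"),
])
print(body)  # outE('ownedElements').inV.toList().isEmpty()
```

Chaining works because every Gremlin step consumes the output of the previous one, so the translation of a chain of OCL calls is simply the dot-joined sequence of the mapped fragments.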
The mapping presented in this Section produces all the Gremlin steps of the resulting traversals. In the next Section, we detail the processing of the input OCL expression and how these steps are linked to produce the complete Gremlin query shown in Listing 2.
B. Transformation Process
1) OCL Metamodel: The input of the transformation is an OCL model representing the abstract syntax tree of the OCL query to perform. Figure 3 presents a simplified excerpt of the OCL metamodel\(^3\). In OCL, a `Constraint` is a named top-level container that contains a `specification` described in an `ExpressionInOCL` element. This expression is composed of an `OCLExpression` representing its `body`, a `context variable` (self), and may define `result` and `parameter` `variables`. An `OCLExpression` can be a type access (`TypeExp`), a variable access or definition (`VariableExp`), or an abstract call expression (`CallExp`). `CallExp` elements are divided into three subclasses: `OperationCallExp` representing OCL operations, `PropertyCallExp` representing property navigations (attribute and reference accesses), and `IteratorExp` representing iteration loops over collections. `IteratorExp` elements define an `iterator Variable` and contain a `body OCLExpression` representing the expression to apply to each element of their input. Finally, expressions can be chained through the `CallExp source` reference representing the element the call applies to, or by being an `argument` of an `OperationCallExp`. In the OCL metamodel we use, all the operations are encapsulated into `OperationCallExp` elements. The actual identifier of the operation is contained in the `name` attribute.
<table>
<thead>
<tr>
<th>OCL expression</th>
<th>Gremlin step</th>
</tr>
</thead>
<tbody>
<tr><td>Type</td><td>g.idx('classes')[[name:'Type']]*</td></tr>
<tr><td>allInstances()</td><td>inE('instanceof').outV</td></tr>
<tr><td>attribute</td><td>attribute</td></tr>
<tr><td>collect(attribute)</td><td>attribute</td></tr>
<tr><td>collect(reference)</td><td>outE('reference').inV</td></tr>
<tr><td>reference (implicit collect)</td><td>outE('reference').inV</td></tr>
<tr><td>oclIsTypeOf(C)</td><td>outE('instanceof').inV.transform{it.next() == C}</td></tr>
<tr><td>oclAsType(C).attribute</td><td>attribute</td></tr>
<tr><td>oclAsType(C).reference</td><td>outE('reference').inV</td></tr>
<tr><td>col1->union(col2)</td><td>col1.fill(var1); col2.fill(var2); union(var1, var2);</td></tr>
<tr><td>col1->intersection(col2)</td><td>col1.fill(var1); col2.fill(var2); intersection(var1, var2);</td></tr>
<tr><td>col1 - col2 (set subtraction)</td><td>col1.fill(var1); col2.fill(var2); subtract(var1, var2);</td></tr>
<tr><td>including(object)</td><td>gather{it &lt;&lt; object;}.scatter;</td></tr>
<tr><td>excluding(object)</td><td>except(object);</td></tr>
<tr><td>includes(object)</td><td>toList().contains(object)</td></tr>
<tr><td>excludes(object)</td><td>!(toList().contains(object))</td></tr>
<tr><td>size()</td><td>count()</td></tr>
<tr><td>first()</td><td>first()</td></tr>
<tr><td>isEmpty()</td><td>toList().isEmpty()</td></tr>
<tr><td>select(condition)</td><td>filter{condition}</td></tr>
<tr><td>reject(condition)</td><td>filter{!condition}</td></tr>
<tr><td>exists(expression)</td><td>filter{condition}.hasNext()</td></tr>
<tr><td>forAll(expression)</td><td>!(filter{!condition}.hasNext())</td></tr>
<tr><td>+, -, /, %, *</td><td>+, -, /, %, *</td></tr>
<tr><td>and, or, not</td><td>&amp;&amp;, ||, !</td></tr>
<tr><td>variable</td><td>variable</td></tr>
<tr><td>literals</td><td>literals</td></tr>
</tbody>
</table>
*Results of index queries are stored in dedicated variables to optimize database accesses.
Figure 4 shows the instance of the OCL metamodel representing the abstract syntax tree of the sample query presented in Listing 1. The top-level element Constraint sampleSelect contains the context variable self of the query, its result variable (res), and an ExpressionInOCL element representing the query itself. Each expression in the OCL metamodel is linked to its source expression, in charge of computing the object(s) on which the next expression will be applied. In the example, the ExpressionInOCL body contains the root expression of the source tree of the query (the select in this case). This select iterator has the allInstances operation as its source, which itself has a source reference to the TypeExp Package (meaning that we iterate over the whole population of the Package class). It also defines an iterator variable e and a body tree (representing the expression to evaluate over each element of the source collection) starting with the isEmpty operation, which is the root of the expression. This operation is applied on the result of the ownedElements property navigation, which has a source reference to a VariableExp expression that refers to the iterator e.
\(^3\)The complete OCL metamodel we use is available at [http://tinyurl.com/hof89by](http://tinyurl.com/hof89by)
Fig. 4. OCL Query Syntax Tree
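The shape of this abstract syntax tree can be sketched with a few classes mirroring the metamodel excerpt of Figure 3. The following Python fragment is an illustration only (the real metamodel is an Ecore model, not Python): it builds the source chain of Figure 4 and walks it down to its root, as the transformation does.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Minimal sketch of the simplified OCL metamodel of Figure 3.
# Class and attribute names follow the text; this is not Eclipse MDT OCL.

@dataclass
class OCLExpression:
    source: Optional["OCLExpression"] = None  # CallExp source chain

@dataclass
class TypeExp(OCLExpression):
    type_name: str = ""

@dataclass
class VariableExp(OCLExpression):
    name: str = ""

@dataclass
class OperationCallExp(OCLExpression):
    name: str = ""                            # operation identifier
    arguments: List[OCLExpression] = field(default_factory=list)

@dataclass
class IteratorExp(OCLExpression):
    name: str = ""
    iterator: str = ""                        # iterator variable name
    body: Optional[OCLExpression] = None

def source_root(e: OCLExpression) -> OCLExpression:
    """Walk the source chain down to its root (e.g. a TypeExp)."""
    while e.source is not None:
        e = e.source
    return e

# Abstract syntax tree of the query of Listing 1 (Figure 4):
query = IteratorExp(
    name="select", iterator="e",
    source=OperationCallExp(name="allInstances",
                            source=TypeExp(type_name="Package")),
    body=OperationCallExp(
        name="isEmpty",
        source=OperationCallExp(name="ownedElements",  # property navigation
                                source=VariableExp(name="e"))),
)
print(source_root(query).type_name)  # Package
```

Walking the source chain to its root is exactly the operation the transformation needs to find the first step of each generated traversal.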
2) Gremlin Metamodel: The output of our model-to-model transformation is a Gremlin model. Since Gremlin does not have a metamodel-based representation of its grammar, we propose our own Gremlin metamodel. As Gremlin is a Groovy based language, we could have reused the Java or Groovy metamodels, but they are too large for our needs and lack an easy way to define the concept of step, a core concept specific to Gremlin.
Figure 5 presents the Gremlin metamodel we use in our approach. In this metamodel, a GremlinScript is defined by a set of Instructions that can be TraversalElements, VariableDeclarations, or Expressions. Supported Expressions are unary and binary comparisons, boolean operations, and Literals. UnaryExpressions and BinaryExpressions contain respectively one and two inner Instructions. A TraversalElement is a single computation step in a Gremlin traversal. It can be either a Gremlin Step, a VariableAccess, or a MethodCall. TraversalElements are organized through a composite pattern: each Step has a next containment reference that links to the next TraversalElement in the chain. In the Gremlin terminology, such a chain of computation is called a traversal. Step elements are the core concept of the Gremlin language. We represent each step presented in Section II, as well as the ones defined in the Gremlin documentation, as subclasses of the Step class. Step subclasses can contain attributes, such as InEStep or OutEStep, which contain the label of the edge to navigate. EdgesStep and VerticesStep correspond to edge and vertex lookups (g.E() and g.V()). Finally, FilterSteps are particular Steps that contain a reference to a Closure, which is defined by a set of Instructions applied on each filtered element.
For the sake of readability, we only put the key concepts in this metamodel excerpt. In particular, we omit a large number of Steps and MethodCalls, as well as the concrete subclasses of the supported unary and binary expressions. A complete definition of the metamodel is provided in the project repository.
Figure 6 presents the instance of the Gremlin metamodel corresponding to the traversal shown in Listing 2. The top-level GremlinScript contains two instructions. The first one is a VariableDeclaration that defines the variable packageV. The value of this variable is defined by a Gremlin traversal composed of a Start step (the initial access to the graph), an IndexCall representing the index query returning the vertex for the metaclass Package, and a NextCall that unrolls the step content and returns the vertex. The second instruction is a VariableAccess, representing the access to the variable defined in the previous instruction. This access is the beginning of a second traversal composed of navigation steps (InE, OutV) and a Filter. This last step contains a Closure representing the boolean condition of the Filter. The Closure is composed of two instructions: a VariableDeclaration that is mapped to the closure iterator, and a VariableAccess followed by a navigation, a ToList cast, and a Groovy IsEmpty check. As shown in the example, it is possible to compose Steps and regular Groovy MethodCalls to create a traversal.
Fig. 6. Generated Gremlin Syntax Tree
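The composite pattern described above (each Step pointing to the next TraversalElement, and serialization by walking the chain) can be sketched as follows. This is an illustrative Python toy, not the Ecore metamodel; the render method stands in for the concrete-syntax serialization of Section IV-B3.

```python
# Sketch of the composite structure of the Gremlin metamodel (Figure 5):
# each element holds a `next` reference to the following TraversalElement,
# and serializing the chain yields the textual traversal.

class TraversalElement:
    def __init__(self, text, next_elem=None):
        self.text = text
        self.next = next_elem

    def render(self):
        """Concatenate the chain into Gremlin's dotted concrete syntax."""
        parts, e = [], self
        while e is not None:
            parts.append(e.text)
            e = e.next
        return ".".join(parts)

class InEStep(TraversalElement):
    def __init__(self, label, next_elem=None):
        super().__init__(f"inE('{label}')", next_elem)

class OutVStep(TraversalElement):
    def __init__(self, next_elem=None):
        super().__init__("outV", next_elem)

class VariableAccess(TraversalElement):
    def __init__(self, name, next_elem=None):
        super().__init__(name, next_elem)

# packageV.inE('instanceof').outV -- the start of the traversal of Figure 6
t = VariableAccess("packageV", InEStep("instanceof", OutVStep()))
print(t.render())  # packageV.inE('instanceof').outV
```

Because `next` is a containment reference, the chain has a single owner and serializing from the head element reproduces the whole traversal, which is how the Gremlin model is turned into the text sent to the database.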
3) Transformation Execution: To create a complete Gremlin traversal, the Mogwaï framework needs to process the AST model representing the syntax tree of the OCL query. In this Section we present how the input OCL query is navigated and how the different elements produced by the mappings presented in Table I are assembled to create the final Gremlin script. For the sake of clarity, we provide an overview of the transformation in Figure 7, which presents how an input OCL Query Model (1) is processed to produce the output Gremlin Traversal Model (9).
---
4http://tinyurl.com/j9hloxr
5http://tinyurl.com/peuyu32
The transformation starts by processing (2) the top-level OCL Constraint and generates the corresponding GremlinScript element (3). The input model is then inspected to find whether the context variable self is accessed in the Constraint body. If so, a VariableDeclaration element is created. The value of the created variable is not set at the model level; it will be bound by the framework before executing the query (see Section V). The same processing is performed to generate VariableDeclarations from parameter variables.
Once this is done, the transformation collects all the TypeExp elements in the input model and generates the corresponding traversals containing an IndexCall (4) that returns, from the database index, the vertex representing the accessed type. The results of these calls are stored in VariableDeclaration elements. These variables are created to improve the execution performance of the generated script by caching index results and limiting database accesses. During this step, a mapping between generated variables and TypeExp elements is computed. This mapping is then reused in the transformation to translate every TypeExp into a VariableAccess element.
In the Gremlin language, it is not possible to natively merge two traversals that do not have the same start step. Furthermore, the Groovy Collection API does not provide methods that merge or subtract two collections and return the updated collection (methods such as addAll and removeAll return a boolean value, and thus cannot be used as the input of the next computation step). A consequence of this limitation is that union, intersection, and set subtraction operations cannot be expressed in a single traversal. To handle these expressions, it is necessary to split the input OCL query (5) into several traversals representing each part of the operation. To do so, the transformation collects all the root elements of each part (source and argument) of union, intersection, and set subtraction operations in the input model, and creates VariableDeclarations to store the results of the subexpressions. Each root element corresponds to an OCL expression (6.1, 6.2) that will be translated into a single traversal (7.1, 7.2). During this step, helper functions that compute the results of these OCL operations are also generated.
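At the string level, the splitting can be pictured as follows: each operand of a union becomes its own traversal ending in fill(), and a generated helper merges the stored variables before the rest of the query runs. This Python sketch only assembles the emitted text; the function and variable names mirror Listing 4 and are ours.

```python
# Illustrative sketch of the traversal splitting for collection operations:
# each operand of `left->union(right)` is emitted as a separate traversal
# ending in fill(), and a generated helper merges the stored variables.

def split_union(left_traversal, right_traversal, rest=""):
    """Emit the Gremlin statements for `left->union(right)` followed by `rest`."""
    lines = [
        "var union1;",                                  # variables storing each operand
        "var union2;",
        f"{left_traversal}.fill(union1);",              # one traversal per operand
        f"{right_traversal}.fill(union2);",
        "def union(col1, col2) { /* union helper body */ }",
        f"union(union1, union2){rest};",                # merged result feeds the rest
    ]
    return "\n".join(lines)

script = split_union(
    "p1.outE('ownedElements').inV.name",
    "p2.outE('ownedElements').inV.name",
    rest=".count()",
)
print(script)
```

The same shape applies to intersection and set subtraction: only the generated helper changes.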
The elements created in the different steps of the transformation are then merged (8) inside the GremlinScript to produce the output Gremlin Traversal Model (9).
To better illustrate how the transformation works, we now discuss how the Mogwaï framework transforms the OCL expression in Listing 1 into the final Gremlin expression shown in Figure 6 (abstract syntax tree) and Listing 2 (final textual expression).
As an initial step, the transformation preprocesses the OCL model to find the accesses to context and parameter variables. In our example, the input OCL expression does not contain references to those variables, and no VariableDeclarations are created. Then, the transformation collects the TypeExp Package and creates the corresponding VariableDeclaration packageV. The value of the created variable is defined by the traversal composed of a StartStep, an IndexCall, and a Next method call, representing the query performed on the class index that returns the vertex corresponding to the metaclass Package.
Then, the transformation collects union, intersection, and set subtraction operations and computes their source and argument root elements. In our example, there is no such operation, and this phase simply returns the root element of the entire OCL expression.
Once this is done, we can start the actual transformation of the OCL expressions computed in the pre-processing phase. To generate the first step of a traversal, the root expression in the source chain is retrieved and transformed according to Table I. In the example, the type access TypeExp is transformed into a VariableAccess (the one defined during the pre-processing phase). The next elements in the traversal are generated by processing the source containment tree in a post-order traversal, where transformed OCL nodes are mapped and linked to the previously generated step using the next reference. In the example, this processing generates the Gremlin nodes inE('instanceof') and outV corresponding to the allInstances expression.
Iterator operations need particular processing: their body has to be transformed as well. In the example, the select iterator is transformed into a filter step containing a closure that represents its body. The body expression is parsed starting from the root element, and the generated steps are linked together. In Figure 6, the body expression is mapped to VariableAccess, outE('ownedElements') and inV, toList, and isEmpty, corresponding respectively to the iterator access, the collect(ownedElements), and the isEmpty OCL expressions. The iterator Variable generates a VariableDeclaration instruction: the name of the iterator Variable is assigned to the generated VariableDeclaration, and its value contains the closure's it value, which represents the element currently processed. This variable shadowing is necessary to avoid the erasure of the implicit it variable in nested iterators.
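The step-linking pass described above can be sketched as a walk of the source chain from its deepest source upwards, joining the already-mapped fragments in generation order. In this illustrative Python toy, nodes carry their mapped Gremlin fragment directly to keep the sketch short; the closure body is our guess at the shape of Listing 2 and not a quote from it.

```python
# Sketch of the step-linking pass of Section IV-B3: the source chain is
# walked in post-order (deepest source first) and each mapped Gremlin
# fragment is linked to the previously generated one.

def link_steps(expr):
    """Join the Gremlin fragments of a source chain in generation order."""
    chain = []
    while expr is not None:          # descend the source references
        chain.append(expr["gremlin"])
        expr = expr.get("source")
    chain.reverse()                  # root of the source chain comes first
    return ".".join(chain)

# select(...) -> allInstances() -> TypeExp Package, as in Figure 4;
# the TypeExp was replaced by the cached variable packageV.
select = {
    "gremlin": "filter{def e = it; e.outE('ownedElements').inV.toList().isEmpty()}",
    "source": {
        "gremlin": "inE('instanceof').outV",
        "source": {"gremlin": "packageV"},
    },
}
print(link_steps(select))
```

Reversing the collected chain is what makes this a post-order linking: the node reached last (the TypeExp at the bottom of the source chain) becomes the first step of the generated traversal.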
Finally, if the OCL expression ends with a union, intersection, or set subtraction operation, or if it is the last one in the argument expression, the transformation generates a Fill step that ends the traversal and puts the results in the dedicated variable defined in the initial step. Then, if the result of the operation is the source of another OCL expression, the transformation generates another traversal that starts with a MethodCall element representing the call to the helper function generated in the initial step.
To better illustrate this particular processing, we present the transformation process of the simple OCL expression shown in Listing 3.
```
p1.ownedElements.name->union(
	p2.ownedElements.name).size();
```
Listing 3. Sample Union OCL Query
This expression collects the names of the elements contained in the packages p1 and p2, merges them using a union operation, and returns the size of the computed collection. As stated before, the transformation starts by processing the input model to find the source and argument root elements of the union operation. In this example, this processing returns the VariableExps p1 and p2. The transformation then generates two VariableDeclarations named union1 and union2 to store the results of the subexpressions p1.ownedElements.name and p2.ownedElements.name (lines 1–2 in Listing 4). Then each subexpression is translated according to the process presented before, and the resulting traversals are assigned to the generated VariableDeclarations using a Fill step (lines 3–4). In addition, the transformation generates the helper function union(col1,col2) that performs the union of two collections (lines 5–7). Finally, the transformation processes the size call, which has the result of the union call as its source. A MethodCall representing a call to the helper function union is generated, and constitutes the input of the last traversal that computes the size of the collection (line 8).
```
var union1;
var union2;
p1.outE('ownedElements').inV.name.fill(union1);
p2.outE('ownedElements').inV.name.fill(union2);
def union(col1, col2) {
	// union helper body
}
union(union1, union2).count();
```
Listing 4. Sample Union Traversal
Once the traversal model has been generated, it is serialized to produce the textual Gremlin query that is finally processed by our tool, as described in the next Section.
V. TOOL SUPPORT
A prototype implementation of the Mogwaï framework is provided as part of NeoEMF [17], a NoSQL persistence framework built on top of EMF. It is implemented as an extension of the framework and supports query translation, execution, and result reification from models persisted with Blueprints. The framework presents a simple query API that accepts a textual OCL expression or a URI to an OCL file containing the expression to transform. In addition, the query API accepts input values that represent the self and parameter variables.
Initial OCL queries are parsed using Eclipse MDT OCL [27] and the output OCL models constitute the input of a set of 70
ATL [4] transformation rules and helpers implementing the mapping presented in Table I and the associated transformation process (Section IV-B3). As an example, Listing 5 shows the transformation rule that creates a filter step from an OCL select operation. The next step is computed by the getContainer helper, which returns the parent of the element in the source tree. The instructions of the closure are contained in an ordered set, to ensure the instruction defining the iterator variable (rule var2def) is generated before the body instructions. Finally, the select body is generated, using the helper getFirstInstruction that returns the root element in a source tree.
```
rule select2filter {
	from
		s : OCL!IteratorExp (s.getOpName() = 'select')
	to
		f : Gremlin!FilterStep (
			closure <- cl,
			next <- s.getContainer()
		),
		cl : Gremlin!Closure (
			instructions <- OrderedSet{}
				.append(thisModule.var2def(s.iterator.first()))
				.append(s.body.getFirstInstruction())
		)
}
```
Listing 5. Select to Filter ATL Transformation Rule
Once the Gremlin model is generated by the transformation, it is expressed using its textual concrete syntax, and the input values corresponding to context and parameter variables are bound to the ones defined during the transformation. The resulting script is sent to an embedded Gremlin engine, which executes the traversal on the database and returns the result back to the framework, which reifies it to create a navigable EMF model. The reification process is done once the query has been entirely executed, and the constructed model only contains the query result objects, removing the memory overhead implied by objects created from intermediate steps of the traversal.
Finally, it is also possible to provide input elements to the Mogwaï framework, for example to check invariants or compute a value on a given part of the model.
VI. EVALUATION
In this section, we evaluate the performance of the Mogwaï framework to query EMF models, in terms of memory footprint and execution time. Results are compared against the performance of different querying APIs/strategies (EMF-Query, standard Eclipse OCL, IncQuery, and Mogwaï) on top of the NeoEMF/Graph backend with Neo4j.
A complementary comparison of the performance gains obtained by using NoSQL solutions instead of SQL ones has been done in previous work [21].
Experiments are executed on a computer running Fedora 20 64 bits. Relevant hardware elements are: an Intel Core i7 processor (2.7 GHz), 16 GB of DDR3 SDRAM (1600 MHz) and a SSD hard-disk. Experiments are executed on Eclipse 4.4.1 (Luna) running Java SE Runtime Environment 1.7. To run our queries, we set the virtual machine parameters `-server` and `-XX:+UseConcMarkSweepGC` that are recommended in Neo4j documentation.
<table>
<thead>
<tr>
<th>Plug-in</th>
<th># LOC</th>
<th>XMI Size</th>
<th># Elements</th>
</tr>
</thead>
<tbody>
<tr>
<td>org.eclipse.gmt.modisco.java</td>
<td>22074</td>
<td>20.2 MB</td>
<td>80664</td>
</tr>
<tr>
<td>org.eclipse.jdt.core</td>
<td>3285068</td>
<td>420.6 MB</td>
<td>1557006</td>
</tr>
</tbody>
</table>
A. Benchmark presentation
The experiments are run over two large models automatically generated using the MoDisco Java Discoverer. MoDisco is a reverse engineering tool able to obtain complete (low-level) models from Java code. The two example models are the result of applying MoDisco to two Eclipse Java plug-ins: the MoDisco plug-in itself and the Eclipse Java Development Tools (JDT) core plug-in. Table II shows the details of the experimental sets in terms of number of lines of code (LOC) in the plug-in, resulting XMI file size, and number of model elements.
To compare the scalability of the different approaches, we perform several queries on the previous models. To simulate a realistic setting, these queries are taken from typical MoDisco software modernization use cases. Queries retrieve:
- **InvisibleMethods**: collects the set of private and protected methods of a Java project.
- **Grabats09**: returns the set of static methods returning their containing class (singleton patterns).
- **ThrownExceptions**: returns the list of exceptions thrown in each package of the plug-in.
- **TextElementInJavadoc**: returns the textual contents of the Javadoc tags in comments of the input Model element.
- **EmptyTextInJavadoc**: returns the empty textual contents of the Javadoc tags in comments of the input Model element.
The first three queries start with an allInstances call, which is an important bottleneck for EMF API based query frameworks [28]. The fourth performs a partial navigation from the input Model element and returns all the textual contents in its Javadoc comments. The last query navigates the model the same way, but only returns empty comments. Table III shows the number of intermediate objects loaded using the EMF API (#Interm.) and the size of the result set (#Res.) to give an idea of the query complexity.
The correctness of the translation has been checked by manually comparing the results of the Mogwaï framework against the results of the Eclipse OCL interpreter. In addition, we provide several test suites in the project repository\(^7\) that check the validity of the translation of single OCL expressions and of compositions of multiple expressions.
All the queries are executed under two memory configurations: the first one is a large virtual machine of 8 GB and the second one is a small virtual machine of 250 MB. This allows us to compare the different approaches under both normal and stressed memory conditions.
---
\(^7\)Query benchmarks can be found at http://tinyurl.com/nhp6pq
\(^8\)http://tinyurl.com/jrj66og
To summarize these results, the Mogwaï framework is an interesting solution to perform complex queries over large models. Using a query translation approach, the gains in terms of execution time and memory consumption are positive, but the results also show that the overhead implied by the transformation engine may not be worthwhile when dealing with relatively small models or simple queries.
The main disadvantage of the Mogwaï framework concerns its integration into an EMF environment: to benefit from it, other Eclipse plug-ins need to be explicitly instructed to use the Mogwaï framework. This integration is straightforward but must be done explicitly, whereas solutions based on the standard EMF API provide their benefits in a transparent manner to all tools using that API.
VII. RELATED WORK
There are several frameworks to query models, especially targeting the EMF framework (including one or more of the EMF backends mentioned in Section I). The main ones are Eclipse MDT OCL [27], EMF-Query [29], and IncQuery [30].
Eclipse MDT OCL provides an execution environment to evaluate OCL invariants and queries over models. It relies on the EMF API to navigate the model, and stores allInstances results in a cache to speed up their computation. EMF-Query is a framework that provides an abstraction layer on top of the EMF API to query a model. It includes a set of tools to ease the definition of queries and manipulate their results. Compared to the Mogwaï framework, these two solutions are strongly dependent on the EMF API: on the one hand this provides an easy integration into existing EMF applications, but on the other hand they are unable to benefit from all the performance advantages of NoSQL databases because of this API dependency.
EMF-IncQuery [30] is an incremental pattern matcher framework to query EMF models. It bypasses API limitations by using a persistence-independent index mechanism to improve model access performance. It is based on an adaptation of the RETE algorithm, and query results are cached and incrementally updated using the EMF notification mechanism to improve performance. While EMF-IncQuery shows excellent execution times [1] when a query is repeated multiple times on a model, the results presented in this article show mixed performance for single query evaluations. This is not the case for our framework: EMF-IncQuery caches and indexes must be built for each query, implying a non-negligible memory overhead compared to the Mogwaï framework. In addition, the initialization of the index needs a complete resource traversal, based on the EMF API, which can be costly for lazy-loading persistence frameworks.
Alternatively, other approaches that target the translation of OCL expressions to other languages and technologies [31] are also relevant to our work. For example, Heidenreich et al. [32] propose a solution to automatically build a database from a UML representation of an application and translate the OCL invariants into database constraints. A similar approach was proposed by Brambilla et al. [33] in the field of web applications; in that case, queries are translated into triggers or views. Nevertheless, in all these scenarios the goal is to use OCL for code-generation purposes as part of a data validation component. Similar generative approaches exist for other pairs of query and target languages [34]. Once the code is generated, there is no link between it and the models, and therefore it cannot be used to speed up model queries. In addition, all these approaches perform the translation at compilation time, whereas the Mogwaï framework translates OCL queries to Gremlin at runtime.
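To give an intuition of this kind of translation, here is a deliberately simplified, string-level sketch in Python (the real Mogwaï framework uses a model-to-model transformation over the OCL abstract syntax; the function and the operator mapping below are hypothetical) for one common OCL query shape:

```python
# Toy illustration (not the Mogwai model-to-model transformation): mapping
# the OCL shape `Type.allInstances()->select(e | e.attr <op> lit)` to a
# Gremlin-style traversal string.
def ocl_to_gremlin(type_name, attribute, op, literal):
    """Translate a simple OCL select over allInstances into a
    Gremlin-like traversal. Names and shape are illustrative only."""
    gremlin_op = {"=": "eq", "<>": "neq", ">": "gt", "<": "lt"}[op]
    return (f"g.V().hasLabel('{type_name}')"
            f".has('{attribute}', {gremlin_op}({literal!r}))")

query = ocl_to_gremlin("Package", "name", "=", "core")
print(query)
# g.V().hasLabel('Package').has('name', eq('core'))
```

The point of the sketch is that the whole filter is pushed into the traversal, so it can be evaluated on the database side instead of reifying every instance through the modeling API.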
De Carlos et al. [35], [36] present the Model Query Translator (MQT), an approach similar to the Mogwaï framework that translates EOL [37] queries into SQL. MQT uses a metamodel-agnostic database schema to store models, and it extends EOL to produce optimized SQL queries executed on the database side. Our translation approach is different because it relies on a model-to-model transformation to produce Gremlin traversals from OCL queries, allowing runtime execution of the transformation as well as preparation of the traversals at compilation time. In addition, graph-based navigation of models removes the overhead implied by complex joins, and the Gremlin language is expressive enough to translate the entire OCL.
Beyond the EMF world, proprietary meta-modeling tools provide specific query languages. This is the case of MetaEdit+ [38] from MetaCase, a commercial tool that supports the development of domain-specific languages and provides a proprietary query language. This is also the case of ConceptBase [39], a deductive object manager for conceptual modeling and meta-modeling, which provides O-Telos, a query language for deductive databases.
Efficient model queries can also be linked to live models and Models@Run.Time [40], which aims to create adaptive software that keeps a model representation of the running system during its execution. In this environment, models become decisional artifacts that are queried during execution to take decisions, compute metrics, or retrieve information. In this context, time and memory consumption are critical aspects, since the decisions (i.e., the queries) have to be taken as quickly as possible in a stressed and concurrent environment. The results presented in this article show that the Mogwaï framework can be an interesting candidate to handle these queries, both in terms of memory consumption and time performance.
VIII. CONCLUSIONS AND FUTURE WORK
In this paper we presented Mogwaï, a framework that generates Gremlin traversals from OCL queries in order to maximize the benefits of using a NoSQL backend to store large models. OCL queries are translated by a model-to-model transformation into Gremlin traversals that are then computed on the database side, reducing the overhead implied by the modeling API and the reification of intermediate objects. We also presented a prototype integrated in NeoEMF/Graph, a scalable model persistence framework that stores models in graph databases. Our experiments have shown that the Mogwaï framework outperforms existing solutions in terms of memory consumption (up to a factor of 75) and execution time (up to a factor of 20) when performing complex queries over large models.
Model transformations rely intensively on model queries to match candidate elements to transform and to navigate the source model. Integrating the Mogwaï framework in model transformation engines to compute these queries directly on the database could drastically reduce the execution time and memory consumption implied by the transformation of large models. Another possible way to enhance transformation engines would be to translate the transformation itself into database queries. This approach would benefit from the Mogwaï framework's improvements for model queries as well as for element creation, deletion, and update.
As future work, we plan to study the definition of custom Gremlin steps and to optimize collection operations to produce more readable traversals. Moreover, while the Gremlin language defines update operations, these modifications cannot be expressed using standard OCL, which is a side-effect-free language. We plan to combine our OCL support with imperative constructs [41], allowing the efficient execution of complex update operations as well. We also plan to study the impact of semantically-equivalent OCL expressions [26] on generated traversals. With this information, it could be possible to improve the quality of the traversals by first applying an automatic refactoring on the OCL side.
---
Footnote: Gremlin is written using the Groovy programming language, which is a dynamic imperative language for the Java platform.
Finally, we would like to study the integration of the Mogwaï framework into model persistence solutions that do not rely on a Gremlin-compatible database. For instance, we plan to test our model-to-model transformation-based approach over SQL databases.
REFERENCES
Abstract: Systems built using low-power, computationally weak devices force developers to favor performance over security; this, jointly with their high connectivity and continuous, autonomous operation, makes those devices especially appealing to attackers. ASLR (Address Space Layout Randomization) is one of the most effective mitigation techniques against remote code execution attacks, but when it is implemented in a practical system its effectiveness is jeopardized by multiple constraints: the size of the virtual memory space, potential fragmentation problems, compatibility limitations, etc. As a result, most ASLR implementations (especially on 32-bit systems) fail to provide the necessary protection. In this paper we propose a taxonomy of all ASLR elements, which categorizes the entropy in three dimensions: (1) how, (2) when and (3) what; and includes novel forms of entropy. Based on this taxonomy we have created ASLRA, an advanced statistical analysis tool to assess the effectiveness of any ASLR implementation. Our analysis shows that all ASLR implementations suffer from several weaknesses: 32-bit systems provide poor ASLR, and OS X has a broken ASLR in both its 32- and 64-bit systems. This jeopardizes not only servers and end-user devices such as smartphones but also the whole IoT ecosystem. To overcome these issues, we present ASLR-NG, a novel ASLR that provides the maximum possible absolute entropy and removes all correlation attacks, making ASLR-NG the best solution for both 32- and 64-bit systems. We implemented ASLR-NG in the Linux kernel 4.15. The comparative evaluation shows that ASLR-NG overcomes the PaX, Linux and OS X implementations, providing strong protection to prevent attackers from abusing weak ASLRs.
Keywords: security; internet of things; address space layout randomisation; vulnerability analysis; protection techniques
1. Introduction
Address Space Layout Randomization (ASLR) is a well-known, mature and widely used protection technique which randomizes the memory address of processes in an attempt to deter forms of exploitation [1], which rely on knowing the exact location of the process objects. Rather than increasing security by removing vulnerabilities from the system, as source code analysis tools [2] tend to do, ASLR is a prophylactic technique which tries to make it more difficult to exploit existing vulnerabilities [3].
Unlike other security methods [4,5], the security provided by ASLR is based on several factors [6], including how predictable the random memory layout of a program is, how tolerant an exploitation technique is to variations in memory layout and how many attempts an attacker can make practically.
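The interaction between entropy and the number of attempts an attacker can afford is easy to quantify with a back-of-the-envelope calculation (my own illustrative sketch; the entropy figures in the comments are common Linux defaults, not results from this paper):

```python
# Back-of-the-envelope sketch: success probability of brute-forcing an
# address protected by n bits of entropy with k independent guesses.
def success_probability(entropy_bits, attempts):
    """Probability that at least one of `attempts` independent guesses
    hits a uniformly random value with `entropy_bits` bits of entropy."""
    p_single = 1.0 / (1 << entropy_bits)
    return 1.0 - (1.0 - p_single) ** attempts

# 8 bits of entropy (the historical 32-bit Linux mmap default) falls
# quickly to brute force; 28 bits (the 64-bit default) does not.
print(success_probability(8, 1000))    # ~0.98
print(success_probability(28, 1000))   # ~4e-6
```

The calculation makes the text's point concrete: whether an attacker can make many attempts in practice matters exactly as much as how many random bits protect the object.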
ASLR is a wide spectrum protection technique, in the sense that rather than addressing a special type of vulnerability, as the renewSSP [7] does, it jeopardizes the programming code [8]
of the attackers independently of the vector [9] used to inject code or redirect the control flow. Similarly to other mitigation techniques, ASLR mitigates code execution attacks by crashing the application, and so the attack is degenerated into a denial of service.
ASLR is an abstract idea with multiple implementations [10–13], though there are important differences in performance and security coverage between them. We therefore need to make a clear distinction between the core concept of ASLR, which is typically described as something which “introduces randomness in the address space layout of user space processes” [14], and the exact features of each implementation.
Although ASLR is more than 14 years old [15] and is still a very effective protection against modern attacks [16,17], there is still a lot of work and innovation to be done, both on the design and on the implementation. For example, the implementation of KASLR (Kernel ASLR), which loads kernel code and drivers or modules at random positions [18], is still under development and improvement in some ecosystems such as the IoT [19] and other 32-bit low-powered devices.
This paper is organized as follows: a full taxonomy of the ASLR is presented in Section 2, followed by a critical analysis of the limitations of current ASLR designs in Section 3. Later, we introduce ASLRA in Section 4, a tool to automatically analyze and detect ASLR weaknesses. Section 5 presents the weaknesses found in Linux, PaX and OS X. Then, in Section 6, we describe the constraints that must be taken into account when designing a practical ASLR. In Section 7 we propose a new ASLR named ASLR-NG, which overcomes all these limitations and weaknesses. Section 8 evaluates our proposal in a real implementation, and finally, Section 9 concludes the paper.
2. ASLR Taxonomy
In this section, we present our novel taxonomy, which considers three different dimensions to assess practical ASLR implementations. This taxonomy revealed the security issues described in Section 5 and is used to develop the assessment tool presented in Section 4.
Depending on the exact implementation details there may be important differences in the final operation and effectiveness of the ASLR, and so in order to understand and compare these differences between ASLR implementations, it is necessary to have a detailed definition of all memory objects and how they can be randomized.
In what follows, a memory object is defined as a block of virtual memory allocated to a process; examples include the stack, the executable, a mapped file, an anonymous map and the vDSO. For our purposes, the size and the base address of each object are the most relevant attributes.
Several objects may be allocated together (in consecutive addresses) with respect to a random address base, which will be referred to as the area or as the zone. For example, in Linux, all objects allocated via the mmap() system call are placed side by side in the mmap_area area.
We have categorized the ASLR into three main dimensions: (1) When, (2) What and (3) How. The first dimension defines how often randomization takes place, the second determines which objects are randomized and the last one defines how and how much the objects are randomized.
2.1. Dimension 1: When
The When dimension defines when the entropy used to randomize the object is generated. For example, per-exec is when the random addresses are generated when a new image is loaded in memory. Linux ASLR randomizes all objects per-exec, so once the process has been created, all subsequent objects (mmap requests) are located side by side. On the other hand, OS X only randomizes libraries per-boot, and library addresses are shared among all executables, even after they are relaunched.
Per-deployment: The application is randomized when it is installed in the system. This form of randomization, also known as pre-linking, was proposed by Bojinov et al. [20] as a mechanism to provide randomization on systems that do not support position independent code (PIC) or relocatable code.
Per-boot: The randomization is done every time the system boots. That is, the random value or values used to map objects are generated at boot time (see Figure 1). This form of randomization is typically used on systems whose shared libraries are not compiled as PIC, so the loader has to relocate the memory image to match the actual addresses. This technique for sharing libraries has some drawbacks:
- Once a library has been relocated in memory, it is no longer a copy of the file, but it has been modified to match the current virtual addresses where it has been loaded. Therefore, all subsequent requests to map that library shall use the same addresses in order to share the same pages.
- Since the memory image does not match the file image (because it has been customized for the current position), it is not possible to use the file itself as the swap-in area of the image. A full swap-out and swap-in sequence on a swap device is necessary.
PIC code is implemented using relative addressing (the compiler emits offsets with respect to the program counter (PC) rather than absolute addresses). Unfortunately, PC-relative addressing modes are not available on the i386 architecture, which makes PIC code slightly larger and slower.
The effectiveness of ASLR has pushed some processor developers to include relative addressing in their new architecture families. The x86_64 architecture implements PC-relative addressing in its 64-bit instruction set.
Per-exec: The randomization takes place when a new executable image is loaded in memory (see Figure 1). In the literature, this form of randomization is known as “per-process randomization”. It must be pointed out, however, that the randomization takes place when a new image is loaded (via the exec() system call) rather than when a new process is created by calling fork().
Per-fork: The randomization takes place every time a new process is created (forked). Recently, Kangjie et al. [21] proposed RuntimeASLR, which implements an aggressive re-randomization of all the fine-grain objects of every child after fork(). This solution sets the ASLR of forked/cloned processes at the same level as the one achieved by an exec(). Unfortunately, the Unix API defines that child processes inherit the memory layout of their parents, and so RuntimeASLR breaks compatibility.
It is still possible to have a per-fork randomization, while preserving API compatibility if randomization is only applied to new objects. Current ASLR designs allocate new objects in consecutive addresses, but it is possible to re-randomize the base address after a fork, so that new objects are unknown to parent and siblings.
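This API-compatible variant can be sketched with a toy model (illustrative Python, not a kernel implementation; all names are hypothetical): inherited objects keep their addresses, while the allocation base for new objects is re-randomized in the child.

```python
import random

# Toy model of API-compatible per-fork randomization: inherited objects
# keep their addresses, but the base used for *new* objects is
# re-randomized after each fork.
PAGE = 4096

class AddressSpace:
    def __init__(self, rng):
        self.rng = rng
        self.objects = {}
        self._rerandomize_base()

    def _rerandomize_base(self):
        self.next_base = self.rng.randrange(1 << 16) * PAGE

    def mmap(self, name, pages):
        addr = self.next_base
        self.objects[name] = addr
        self.next_base += pages * PAGE   # side-by-side within a zone
        return addr

    def fork(self):
        child = AddressSpace.__new__(AddressSpace)
        child.rng = self.rng
        child.objects = dict(self.objects)  # inherited layout preserved
        child._rerandomize_base()           # fresh base for new objects
        return child

rng = random.Random(42)
parent = AddressSpace(rng)
lib = parent.mmap("lib", 4)
child = parent.fork()
assert child.objects["lib"] == lib   # API compatibility is kept
child.mmap("new_obj", 1)             # unknown to parent and siblings
```

In this scheme a leak in the parent reveals nothing about objects the child allocated after the fork, which is exactly the property described above.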
Per-object: The object is randomized when it is created (see Figure 2). Note that objects that are at a constant distance from another already-mapped object are not considered to be randomized on a per-object basis, even if the reference object is randomized: if the position of one of the two objects is leaked, then the position of the other is immediately known.
2.2. Dimension 2: What
The second dimension determines the granularity of What is randomized, whereby the more objects are randomized, the better. Some security researchers consider that if a single object (for example, the executable) is not randomized, ASLR can be considered broken. A more aggressive form of ASLR is when the randomization is applied to the logical elements contained in the memory objects: processor instructions [22], blocks of code [11,23], functions [12,24,25] and even the data structures of the application [26]. In these cases, the compiler, the linker and the loader are required to play a more active role. Although these solutions are more advanced and secure, they are still not included in current systems.
2.3. Dimension 3: How
The way an object is randomized is defined by the last dimension: How. It is possible to consider two sub-dimensions: (1) how many bits are random and (2) what the randomness between objects (inter-object) is; that is, the absolute entropy of the object by itself and the conditional entropy between objects. Regarding how many bits of the address are randomized, there are two forms: full-VM and partial-VM. Partial-VM is when the memory space is divided into disjoint ranges used to generate random numbers for the addresses. In full-VM, the complete virtual memory space is used to randomize an object. The requirements to carry out full-VM randomization are analyzed in the next sections. As far as we know, current ASLR implementations only use partial-VM randomization.
Partial VM: The virtual memory space is divided into ranges or zones (see Figure 3). Each zone contains one or more objects. Typically, zones do not overlap, and so each zone defines a proper subset of the memory space. In most implementations, only a small range of the memory space is used; that is, the union of all the ranges does not cover all the virtual memory. Partial-VM randomization greatly simplifies the implementation of ASLR because object collisions are prevented, but the effective entropy is reduced.
Full VM: All the virtual memory space is used to randomize the objects (see Figure 3). When this randomization is used, the order of the main areas (exec, heap, libs, stack, ...) is no longer honored. Special care must be taken to avoid overlapping and collisions. The effective entropy is greatly increased. As far as we know, no current ASLR design uses full-VM randomization.
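The entropy difference between the two forms can be sketched with a small calculation (illustrative figures, not measurements from this paper; the computation ignores the object's own size for simplicity):

```python
import math

# Illustrative entropy computation: number of random bits available for a
# page-aligned object placed somewhere in a range of the given size.
def entropy_bits(range_size, page_size=4096):
    """log2 of the number of page-aligned positions in `range_size` bytes."""
    return int(math.log2(range_size // page_size))

# Partial-VM: a 1 GiB zone on a 32-bit system -> 18 bits of entropy.
print(entropy_bits(1 << 30))   # 18
# Full-VM: the whole 3 GiB 32-bit user space -> 19 bits (floor).
print(entropy_bits(3 << 30))   # 19
```

Even in this simplified form, the calculation shows why partial-VM zones cap the entropy well below what the address space could in principle provide, especially on 32-bit systems.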
**Isolated-Object:** The object is randomly mapped with respect to any other (see Figure 4). Some attacks rely on knowing the address of one object to exploit another one because there is a correlation between them [27,28]. In order to be effective, the correlation entropy of the isolated objects with respect to the rest of the objects must be greater than the absolute entropy of each one. Therefore, an information leak of the position of an isolated object gives no hint of the memory layout of the process beyond the leaked object itself.

**Correlated-Object:** The object is placed taking into account the position of another one (see Figure 4). The position of a correlated object is calculated as a function of the position of another object and a random value. When two objects are mapped together, side by side, they are fully correlated. Some examples of correlated objects are: Linux libraries and PaX thread stacks and libraries.
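A minimal simulation (a toy layout of my own, not a real ASLR implementation) shows why correlation matters: leaking the base of one correlated object reveals the other exactly.

```python
import random

# Toy simulation of correlated objects: both share one random base, so a
# single leak reveals the whole pair. The 0x100000 distance is arbitrary.
PAGE = 4096

def layout_correlated(rng):
    lib_base = rng.randrange(0, 1 << 28) * PAGE
    # Thread stack placed at a fixed distance from the library zone,
    # mimicking fully correlated objects mapped side by side.
    stack_base = lib_base + 0x100000
    return lib_base, stack_base

rng = random.Random(1234)
lib, stack = layout_correlated(rng)
# An attacker who leaks `lib` recovers `stack` with zero guessing:
assert stack - lib == 0x100000
```

An isolated-object design would instead draw an independent random base for the stack, so the difference `stack - lib` would itself be random and the leak would reveal nothing beyond `lib`.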
**Sub-page:** The offset bits of the page are also randomized (see Figure 5). By default, ASLR implementations use the processor virtual memory paging support to randomize objects. If no additional entropy is added, addresses are page aligned. Depending on the type of object (shared object, contains data or code, swap constraints, etc.) sub-page randomization may be implemented transparently. For example, the stack and the heap have sub-page randomization in current Linux.

**Direction:** Up/down search side used on a first-fit allocation strategy (see Figure 6). When allocating objects together, new objects can be placed at higher addresses than the ones already mapped (bottom-up) or at lower addresses (top-down). The direction is used to decide the side on which to place new objects. There are two situations where the direction is necessary:
- When objects are randomly mapped, it may occur that (if not prevented) a randomly generated address collides with an existing object, in this case, the direction determines the side of the existing object (up or down) where the new one will be mapped.
- When objects are mapped together: the direction bit determines how the area or zone grows.
Note that the direction is not a global parameter or feature of the ASLR, but its scope can be determined on a per-object or per-zone basis.
**Bit-Slicing:** The address of an object is the concatenation of two, or more, random numbers which are obtained at different times (for example, at boot and at exec), as shown in Figure 7. For example, this form can be used when a subset of the addresses must be aligned to a fixed value because of performance reasons. In this case, the alignment can be randomly chosen at boot time, and then align all the mappings to that value. Later, all address bits may be randomized except the aligned bits which are set to the value chosen previously.
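The idea can be sketched as follows (an illustrative toy model; the bit widths are assumptions, not values from any real implementation): the low alignment bits are chosen once per boot, the high bits at every exec.

```python
import random

# Toy model of bit-slicing: an address is the concatenation of a
# per-boot slice (low, alignment bits) and a per-exec slice (high bits).
ALIGN_BITS = 12   # assumed: bits fixed at boot to define the alignment
HIGH_BITS = 16    # assumed: bits re-randomized at each exec

def boot_slice(rng):
    return rng.randrange(1 << ALIGN_BITS)

def exec_address(rng, boot_bits):
    high = rng.randrange(1 << HIGH_BITS)
    return (high << ALIGN_BITS) | boot_bits

rng = random.Random(7)
boot_bits = boot_slice(rng)       # drawn once, at "boot"
a1 = exec_address(rng, boot_bits) # first "exec"
a2 = exec_address(rng, boot_bits) # second "exec"
# Both execs share the boot-time slice but differ in the exec-time slice.
assert a1 % (1 << ALIGN_BITS) == a2 % (1 << ALIGN_BITS)
```

All mappings in one boot thus share the same (random) alignment while still getting fresh high bits per exec, which is exactly the trade-off described above.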
**Specific-zone:** A base address and a direction where objects are allocated together. Taking into account security aspects, the more random bits and the more independent the mappings, the better. On the other side, if the goal is to reduce the overhead, the more compact the layout the better. Specific-zones define a mechanism that can be used to group together objects of the same or similar security level, and to isolate each group from others of a different level of security/criticality. Figure 8 shows an example of how the objects allocated to a specific-zone are mapped.
Table 1 summarizes all forms of randomization grouped by the dimensions When, What and How. These are the fundamental features to assess the quality and robustness of ASLR implementations.
Table 1. Summary of randomization forms and dimensions.
<table>
<thead>
<tr>
<th>Feature</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>When</td>
<td></td>
</tr>
<tr>
<td>Per-boot</td>
<td>Every time the system is booted.</td>
</tr>
<tr>
<td>Per-exec</td>
<td>Every time a new image is executed.</td>
</tr>
<tr>
<td>Per-fork</td>
<td>Every time a new process is spawned.</td>
</tr>
<tr>
<td>Per-object</td>
<td>Every time a new object is created.</td>
</tr>
<tr>
<td>What</td>
<td></td>
</tr>
<tr>
<td>Stack</td>
<td>Stack of the main process.</td>
</tr>
<tr>
<td>LD</td>
<td>Dynamic linker/loader.</td>
</tr>
<tr>
<td>Executable</td>
<td>Loadable segments (text, data, bss, ...).</td>
</tr>
<tr>
<td>Heap</td>
<td>Old-fashioned dynamic memory of the process: brk().</td>
</tr>
<tr>
<td>vDSO/VVAR</td>
<td>Objects exported by the kernel to the user space.</td>
</tr>
<tr>
<td>ARGV</td>
<td>Command line arguments and environment variables of the process.</td>
</tr>
<tr>
<td>Mmaps/libs</td>
<td>Objects allocated calling mmap().</td>
</tr>
<tr>
<td>How</td>
<td></td>
</tr>
<tr>
<td>Partial VM</td>
<td>A sub-range of the VM space is used to map the object.</td>
</tr>
<tr>
<td>Full VM</td>
<td>The full VM space is used to map the object.</td>
</tr>
<tr>
<td>Isolated-object</td>
<td>The object is randomized independently from any other.</td>
</tr>
<tr>
<td>Correlated-object</td>
<td>The object is randomized with respect to another.</td>
</tr>
<tr>
<td>Sub-page</td>
<td>Page offset bits are randomized.</td>
</tr>
<tr>
<td>Bit-slicing</td>
<td>Different slices of the address are randomized at different times.</td>
</tr>
<tr>
<td>Direction</td>
<td>Top-down/bottom-up search side used on a first-fit allocation strategy.</td>
</tr>
<tr>
<td>Specific-zone</td>
<td>A base address and a direction where objects are allocated together.</td>
</tr>
</tbody>
</table>
The presented randomization forms are analyzed in Sections 5 and 8 to properly assess the ASLR effectiveness of Linux, PaX and OS X. This taxonomy is key not only to assess current ASLR effectiveness but also to develop future ASLR designs and implementations, especially in 32-bit systems, where the available virtual memory space poses a challenge to all ASLRs.
3. ASLR Limitations
In this section we analyze the problems and limitations present in all ASLR implementations employed by modern operating systems. The main limitation is introduced by the fact that there are objects (e.g., the stack and the heap) that need to grow in memory at runtime. This requires dividing the virtual memory region to ensure that those objects can grow.
In more detail, the Linux and PaX ASLR designs (FreeBSD, HardenedGentoo and others use the PaX ASLR approach) rely on the same core ideas, in that they define four partial-VM areas: (1) stack, (2) libraries/mmaps, (3) executable and (4) heap. The classic memory layout was designed considering that some zones or objects are growable (main stack, thread stacks and heap). Ideally, a growable object is a contiguous area of memory which can be expanded or shrunk dynamically according to the needs of the program. In order to allow them to grow, these objects are typically placed at the border (top or bottom) of the virtual memory, far away from other objects; otherwise they would not be able to grow or, even worse, a silent collision could occur.
Originally, growable objects were a smart, simple and efficient solution for memory usage. However, the use of threaded applications and the possibility of dynamically adding new objects into the memory space forced developers to reconsider the viability of these growable areas. Today, growable objects are a source of numerous problems [29,30], but fortunately a set of advanced programming solutions has been developed.
Growable objects impose strong limitations on ASLR design, and they negatively affect the entropy of all objects. The situation gets even worse when multiple growable objects are used in the application, as actually happens with multiple thread stacks. The approach used in Linux (see Figure 9) involves placing the objects as far apart as possible from each other, which forces fixing the high bits of the addresses, thus degrading effective entropy. Since the extremes of the virtual memory
are already occupied, libraries and mmaps are placed between the stack and the heap. Note that a small shared library is automatically mapped by the kernel into the address space of all user-space applications (vDSO). Therefore, both static and dynamically linked (PIE or not) programs have a similar memory layout.
Figure 9. Classic memory layout.
Originally, PIE-compiled applications were loaded jointly with the libraries, but after the Offset2lib weakness [27] was identified, the PIE executable was moved to lower addresses (Linux 4.1) in its own zone.
3.1. Stacks
There are two different kinds of stacks, namely the stack of the main process, and the stack of the threads or clones (since both thread and cloned entities handle stacks in a similar way, in what follows we will refer to them jointly as ‘thread stacks’). The main stack is still considered and handled as a growable object.
Initially, thread stacks were ‘set up’ to be growable. Flags MAP_GROWSDOWN and MAP_GROWSUP were added to the mmap() request, to tell the kernel about the expected behavior. Eventually, these flags were removed [29] because of security problems and intrinsic logical limitations. Nowadays, thread stacks are treated as regular (non-growable) objects, reserved with the maximum estimated size when the threads are created. By default, the thread stack size is set to 8 MB (in Debian and Ubuntu), but it can be changed by using the RLIMIT_STACK resource with the setrlimit() system call. Note that the RLIMIT_STACK value is used as the default thread stack size rather than as an upper limit.
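As an illustrative sketch (not part of the paper's tooling), the soft `RLIMIT_STACK` value that `libpthread` will adopt as the default thread stack size can be queried with `getrlimit()`; the helper name `default_thread_stack_size` is ours:

```c
#include <stdio.h>
#include <sys/resource.h>

/* Returns the soft RLIMIT_STACK value, which libpthread uses as the
 * default thread stack size (not as an upper bound); 0 on error. */
unsigned long long default_thread_stack_size(void) {
    struct rlimit rl;
    if (getrlimit(RLIMIT_STACK, &rl) != 0)
        return 0;
    return (unsigned long long)rl.rlim_cur; /* e.g., 8 MB on Debian/Ubuntu */
}
```

Calling `setrlimit(RLIMIT_STACK, ...)` before creating threads changes the default size of subsequently created thread stacks.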
A summary of the facilities provided by the system (compiler and standard library), to deal with growable stack issues, is presented below:
- One or more protected pages (guard pages) are placed at the end of the thread stack. If the stack overflows, the process receives a SIGSEGV signal. The guard is further enforced by the GCC flag -fstack-check, which emits extra code to touch the pages of a new stack frame sequentially, thus preventing the stack pointer from jumping over the guard page.
- The split stack feature (GCC flag -fsplit-stack) generates code to automatically continue the stack in another object (created via mmap()), before it overflows. As a result, the process has discontinuous stacks which will only run out of memory if the program is unable to allocate more memory. This is an interesting feature for threaded programs, as it is no longer necessary to calculate the maximum stack size for each thread. This is currently only implemented in the i386 and x86_64 back-ends running in GNU/Linux.
- The `-fstack-usage` flag asks the compiler to print stack usage information for the program on a per-function basis, which helps to estimate the required stack size.
Although these facilities are very helpful when dealing with stacks, in practice most applications work with the default 8 MB sequential stack (Google Chrome, LibreOffice, Firefox, etc.). Only very demanding applications have stack size issues, which are typically handled by slightly increasing the `RLIMIT_STACK` limit value. For example, Oracle advises setting it to 10 MB when running its database.
3.2. Heap
When the process needs more heap memory, it calls the `brk()` system call to move the heap's end forward (to higher addresses). The operating system tries to expand the heap object sequentially to provide the requested memory. The `brk()` request may fail if (1) there is not enough free memory contiguous to the existing heap, because the end of the memory has been reached or because another object is already at that address, or (2) the data segment limit has been exceeded, as set by the `RLIMIT_DATA` resource.
The heap is used by the standard C library to provide dynamic storage allocation (DSA): `malloc()` and `free()`. Although originally DSA algorithms relied exclusively on heap memory, current implementations use multiple non-contiguous objects of memory requested by `mmap()`. In fact, the GNU libc uses `mmap()` when the size is larger than 128 Kb. Also, `brk()` was marked as LEGACY in SUSv2 and removed in POSIX.1-2001.
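The threshold can be observed from user space: with the default glibc configuration, a small request is served from the heap (`brk`) while a request above 128 KB is served via `mmap()`, so the two allocations land far apart. A hedged sketch (`alloc_distance` is our own helper; the exact distance depends on the allocator):

```c
#include <stdint.h>
#include <stdlib.h>

/* Distance in bytes between a small (heap/brk-backed) allocation and a
 * large (mmap-backed) allocation, under the default glibc threshold. */
long long alloc_distance(void) {
    char *small = malloc(64);       /* below the 128 KB threshold: heap */
    char *large = malloc(1 << 20);  /* 1 MB: served by mmap() */
    long long d = (long long)((intptr_t)large - (intptr_t)small);
    free(small);
    free(large);
    return d < 0 ? -d : d;
}
```

On a typical Linux/glibc system the heap and the mmap area are separated by a large span, so the distance is well above the allocation sizes themselves.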
4. ASLRA: ASLR Analyzer
In this section we present the tool we have developed to effectively assess ASLR implementations. ASLRA discovered several weaknesses, as Section 5 describes, and also assessed the effectiveness of the proposed ASLR-NG in Section 7.
Although it is possible to analyze how ASLR has been implemented in the operating system to calculate the intended or expected entropy, there are too many interactions and adjustments between the code that generates the random values and the code that finally assigns the address to the object. In fact, we have detected several security issues by observing the external entropy of both Linux and PaX [31–33].
Peter Busser developed a tool called `paxtest` (included in most Linux distributions) to estimate, among other security features, the entropy of memory objects. It uses a custom *ad hoc* algorithm to guess the effective entropy bits. This algorithm was designed assuming that the underlying distribution is uniform with a power-of-2 range. When these conditions do not hold, the result is incorrect. Also, it does not provide basic statistical information about the observed distribution. For example, PaX suffers from non-uniform weaknesses (see Section 5.2) on most mappings, and so its entropy is overestimated by `paxtest`.
We have developed ASLRA, a test suite, which can be used to measure and analyze the entropy of all the objects. ASLRA is composed of two separate utilities:
**Sampler:** Launches a large number of processes and collects the addresses of selected memory objects.
**Analyzer:** The resulting sample file is processed to calculate several statistical parameters (see Figures 10–12). Besides the basic ones (range, mean, median, mode and standard deviation), the analyzer calculates four different entropy estimators: (1) flipping bits, (2) individual byte Shannon entropy, (3) Shannon entropy with variable-width bins and (4) Shannon 1-spacing estimator [34]. The tool also provides information about memory fragmentation, conditional entropy, and multiple plots (histogram, distribution, etc.).
We have used the term “entropy” in this paper to refer to the amount of randomness exhibited by an object. In information theory, entropy is a measure of the amount of information provided by a piece of data, typically given by its probability of occurrence. For our purposes, entropy shall be defined as “the amount of uncertainty that an attacker has about the location of a given object”. Shannon entropy is formally defined as:
\[ H(X) = - \sum_{x \in X} p(x) \log_2 p(x) \]
The resulting value is a good measure of dispersion or surprise, but it must be interpreted with caution [35]. In most cases it is an accurate estimation of the cost of an attack, but only if the distribution is uniform. It is especially problematic for distributions with high kurtosis, because the attacker can focus on a small range of values, thereby building faster attacks.
---
**Figure 10.** Statistical information produced by ASLRA for the Thread stack on PaX i386.
Entropy estimation is a challenging issue [36]. In order to obtain accurate results, it is necessary to have very large data sets or, alternatively, to make assumptions about the underlying distribution. The sampler part of the ASLRA tool is a simple application using `malloc()`, `mmap()`, etc., and has been optimized to generate millions of samples in a few minutes, to avoid biased estimations for most distributions. If estimating single-variable entropy is already challenging, measuring the correlation between objects and properly estimating the conditional entropy requires a huge set of samples. We have addressed the issue by assuming that the relation between objects is defined by the sum of a uniform random variable or a constant value. Therefore, the conditional entropy is calculated as the entropy of the difference of the objects. Table 2 shows the memory objects that the developed ASLRA tool can collect.
Table 2. Objects analyzed by the ASLRA tool.
<table>
<thead>
<tr>
<th>Object</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Arguments</strong></td>
<td>The arguments received by <code>main()</code> and the environment variables of the process.</td>
</tr>
<tr>
<td><strong>HEAP</strong></td>
<td>The initial heap location as returned by <code>brk()</code></td>
</tr>
<tr>
<td><strong>Main Stack</strong></td>
<td>The stack of the process, that is the address of a local variable of the <code>main()</code> function.</td>
</tr>
<tr>
<td><strong>Dynamic Loader</strong></td>
<td>For dynamic executables, the address of <code>ld.so</code>.</td>
</tr>
<tr>
<td><strong>vDSO</strong></td>
<td>Linux-specific object exporting services such as the syscall mechanism.</td>
</tr>
<tr>
<td><strong>Glibc</strong></td>
<td>The standard C library used by the majority of processes.</td>
</tr>
<tr>
<td><strong>Thread Stack</strong></td>
<td>The stack created by default when a new thread is created by the <code>libpthread.so</code> library.</td>
</tr>
<tr>
<td><strong>FIRST_CHILD_MAP</strong></td>
<td>The address returned by the first <code>mmap()</code> object of a child process.</td>
</tr>
<tr>
<td><strong>EXEC</strong></td>
<td>The address where the executable is loaded.</td>
</tr>
<tr>
<td><strong>MAP_HUGETLB</strong></td>
<td>Address of a large block (2 Mb) reserved via <code>mmap()</code> using the flag <code>MAP_HUGETLB</code>.</td>
</tr>
</tbody>
</table>
The resulting conditional entropy of the system is presented using a table and a graph that shows the absolute entropy and the coentropy. See Figure 13.

5. ASLR Weaknesses
In this section we describe four ASLR design weaknesses that affect current ASLR implementations. The results of these weaknesses are represented in Table 4 as a lack of randomization forms, and as a low absolute entropy in Table 5.
5.1. Non-Full Address Randomized Weakness
ASLR has been implemented by slightly and randomly shifting the base address of objects with respect to the classic layout, where the main stack is at the top, the executable is at the bottom, the heap follows the bss segment and the mmap zone is located between the heap and the stack. Therefore, the entropy that can be applied to each object is limited by the range over which it can be moved while preserving this sequence. This affects the higher bits of the addresses [6].
Another constraint that reduces entropy is the unnecessary alignment of some objects. Although alignment is mandatory for some objects (huge pages, executable, libraries, etc.), others, like the stacks and the heap, can be randomized at sub-page granularity. It may be possible to extend sub-page randomization to more objects by extending the semantics of the `mmap` syscall.
5.2. Non-Uniform Distribution Weakness
The distribution of objects along the range should be as uniform as possible. That is, all the addresses should have the same, or very similar, probability of occurrence; otherwise, it would be possible to speed up attacks by focusing on the most frequent (likely) addresses.
Figure 14 shows the output of the ASLRA (ASLR analyzer) tool for the libraries and mmap objects in PaX. The distributions of these objects in i386 follows a triangular distribution and on x86_64 an Irwin-Hall with $n = 3$. In Linux, the heap is the result of the sum of two random values, but since one of them is much larger than the other, the impact on the distribution is almost negligible.

**Figure 14.** Statistical distribution of mmapped objects (libraries, thread stacks, anonymous maps, etc.) using PaX security mode: (a) on an i386 system; (b) on an x86_64 system.
5.3. Correlation Weakness
Attacks launched to bypass ASLR are becoming more and more sophisticated; for instance, instead of attacking an object directly, the attacker can first de-randomize an object with low entropy, and then use it to de-randomize the target object (the object which contains the required gadgets or data). The idea that the leak of one object's memory address can be used to exploit another was first demonstrated by Marco-Gisbert et al. [27] through the offset2lib weakness. In that case, the executable was de-randomized using a byte-for-byte attack [37] and then the libraries zone was calculated, resulting in a very fast bypass of ASLR. This weakness was analyzed by Herlands et al. [38], and referred to as EffH (Effective Entropy).
In Linux and PaX, the heap and the executable are separated from each other by a random value. A leak in the heap area not only compromises the heap, but also reveals information about the executable, because there is less entropy in the distance from the executable to the heap (correlation entropy) than in the absolute entropy of the executable. Huge pages and the objects in the mmap_area are also correlated. Since huge pages have the largest alignment, they have the lowest entropy, and attackers can build correlation attacks by de-randomizing huge pages first and then the libraries zones. For example, in PaX i386, instead of attacking the libraries directly (16 bits), attackers can de-randomize huge pages (6 bits) and later de-randomize the libraries zone from the huge page zone (10 bits). This two-step attack takes $1088 = 2^{10} + 2^6 \approx 2^{10}$ attempts, instead of the $65,536 = 2^{16}$ tries required to attack the library directly.
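The cost figures of the two-step attack are simple arithmetic and can be checked directly; the helper names below are ours:

```c
/* Guesses needed to brute-force an object with `bits` bits of entropy. */
unsigned long guesses(unsigned bits) {
    return 1UL << bits;
}

/* Two-step correlation attack: de-randomize a low-entropy object first
 * (bits1 guesses), then the target relative to it (bits2 guesses). */
unsigned long twostep(unsigned bits1, unsigned bits2) {
    return guesses(bits1) + guesses(bits2);
}
```

For the PaX i386 example, `twostep(6, 10)` is 1088 attempts, roughly 60 times cheaper than the 65,536 attempts of a direct attack.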
All mmapped objects are located together in the mmap_area, which results in total correlation between them. From a security point of view, this is a weak design. MILS (Multiple Independent Levels of Security/Safety) [39] criteria state that objects of different criticality levels must be isolated. Google Chrome, for instance, is aware of this security issue and has addressed it by implementing some form of user-land ASLR to map JIT (just-in-time compiled code) objects in its own zone, which is separated from the default mmap_area. Note that JIT objects are an appealing target for attackers [40].
5.4. Memory Layout Inheritance Weakness
All child processes inherit/share the memory layout of the parent. This is the expected behavior, since children are an exact copy of the parent’s memory layout. Unfortunately, though, from a security point of view, this is not a desirable behavior because although new objects belong only to the child process, their addresses can be guessed easily by parents and siblings. This problem is especially dangerous on networking servers which use the forking model. In Android, for instance, the situation is even worse, because all of the applications are children of Zygote, and although the siblings might not call the same mapping sequence, a malicious sibling can predict future mmaps of any other. Therefore, the leakage of an object in the library or mmap area exposes all objects in these areas (correlation weakness) and also allows one to predict where future mmaps will be placed—even between siblings (inheritance weakness).
This issue is widely known. The solution used by the SSHD suite consists of relaunching (fork + exec) the process for every incoming connection, rather than directly forking a process. In this way, not only the new maps but all the maps are different among siblings. Lee et al. [41] proposed using the same solution to replace the application creation model of Zygote by a pool of pre-exec processes; this new model was called Morula.
6. ASLR Constraints and Considerations
The straightforward solution to solving most of the previous weaknesses is to randomize each object independently over the full VM range. Although this idea is quite intuitive [12], multiple practical issues must be addressed properly, in order to achieve a realistic ASLR design. ASLR-NG has been designed by taking into account the following issues:
- **Fragmentation:** although, from the point of view of security, having objects spread all over the full VM space is the best choice, in some cases it introduces prohibitive fragmentation, which is especially severe in 32-bit systems. Applications that request large objects or make a lot of requests may fail randomly, so it is mandatory to have a mechanism to address this fragmentation.
- **Page table footprint:** a very important aspect that is underestimated is the size of the process page table, because the more the objects are spread, the bigger the page table. This is particularly important in systems with low memory or with a high number of processes. Since each application could have a different level of security, the ASLR design should allow for tuning the page table size versus object spreading.
- **Growable areas:** unfortunately, most applications still use growable areas in some objects. In order to be compatible with these applications, an ASLR must guarantee some form of compatible behavior.
- **Homogeneous entropy:** all objects should have the same amount of entropy, in particular objects of the same type (for example, stacks); otherwise, attackers will focus on the weakest link. Unfortunately, none of the current designs meets this requirement.
- **Uniformly distributed:** all objects should be uniformly distributed; otherwise, attackers can design more effective attacks by focusing on the most frequent addresses.
- **ASLR compatibility:** the ASLR design should be backward-compatible with existing applications. That is, if there is a trade-off between security and compatibility, then the design should allow for tuning the application framework to meet the application's needs.
7. ASLR-NG: Address Space Layout Randomization Next Generation
This section describes the proposed ASLR-NG, which addresses all the weaknesses identified in Section 5 as well as all of the constraints and considerations presented in Section 6. ASLR-NG does not divide the virtual memory region, enabling any object (stack, heap, libraries, etc.) to be loaded at any address without restriction. In order to achieve this, ASLR-NG limits and pre-reserves (no actual memory is allocated) all growable objects (main stack and heap).
When those objects need to grow, they consume more of the pre-reserved memory until they reach the limit. The pre-reserved memory area can be seen as memory belonging to a particular object where other objects cannot be allocated.
As presented in Section 3, both the stack and the heap are growable but limited, which makes ASLR-NG a realistic and practical ASLR. ASLR-NG uses those limits to create a pre-reserved memory area for each growable object; compatibility with applications is ensured by adjusting those limits. In fact, by pre-reserving memory, ASLR-NG prevents accidental mappings and collisions. For example, in current implementations it is possible to `mmap()` an object very close to the stack, resulting in a collision when the stack grows; this is not possible with ASLR-NG, since only the stack can allocate memory in its pre-reserved area and the `mmap()` will fail.
7.1. Allocating Object Strategy
Two methods are available to allocate an object in ASLR-NG: isolated and in a specific-zone.
- **Isolated:** the object is independently randomized using the full virtual memory space of the process. Unlike current implementations, ASLR-NG can use the full VM range to allocate an object; as a result, there is no ordering among the objects, which prevents any kind of correlation attack.
- **Specific-zone:** objects of the same class are mapped together and isolated from others. A specific-zone is defined by a base address and a direction flag, both of which are initialized when the specific-zone is created (see function `new_zone()` in Listing 1). The base address is a random value taken from the full VM space, and new objects are placed by following the direction flag (toward higher or lower addresses) with respect to the base address.
The main benefit of using specific-zones is that it reduces both fragmentation and page table footprint, which makes ASLR practical and realistic. Furthermore, specific-zones can be created according to MILS criteria, in that objects of the same criticality level may be grouped together. Criticality depends, among other factors, on the permissions and the kind of data stored on the object. Following this rule, ASLR-NG defines five specific-zones (depending on the configuration, see profile modes below):
- **Huge pages:** placing all huge page objects in their own specific-zone removes the correlation weakness between huge pages and normal mmapped objects. This is an especially dangerous form of correlation weakness, as described in Section 8.3.
- **Thread stacks:** following the same criteria as the main stack, the thread stacks are isolated from the rest of the objects in their own specific-zone.
- **Read-write-exec objects:** although these types of object are seldom used, for example in JIT mapping, they are very sensitive; in fact, Google implements custom randomization in their Chromium browser for these objects as part of its sand-boxing framework.
- **Executable objects:** map requests with executable permission are grouped in a specific-zone. This zone is mainly used to group library codes.
- **Default zone:** any other objects that do not match previous categories are allocated to this specific-zone. In addition, applications can create custom specific-zones to isolate sensitive data. For example, the credentials or certificates of a web server can be isolated from the rest of the regular data. This mechanism can prevent a Heartbleed [42] attack by moving sensitive data (certificates) away from the vulnerable buffer.
7.2. Addressing Fragmentation
When virtual memory size is small, fragmentation is an issue, because the more objects that are independently randomized, the more fragmented the memory. In dynamic memory, the fragmentation problem is defined [43] as “the inability to reuse memory that is free”.
There is no simple way to measure fragmentation, but the worst case depends on: (1) the number of objects already allocated, (2) their size, (3) the relative position of each one and (4) the size of the
new request. If all objects, \( n \), are independently randomized, the worst case occurs when the allocated objects are of one page size and they are evenly distributed along the whole memory space. In this case, the maximum guaranteed size is approximated by:
\[
\text{new\_obj\_size} \lesssim \frac{\text{VM\_SIZE}}{n + 1}
\]
On x86_64 fragmentation is not an issue because of the very large number of mapped objects needed to cause an error. For example, a 1GB memory request will not fail until \( 2^{17} = 131,072 \) objects have been mapped.
On the other hand, fragmentation is a real problem in 32-bit systems. For example, a memory request of 25MB is not guaranteed after just 122 requests (of page size), while a request for 256 MB may fail after mapping just 12 objects, including the stack, vDSO, executable, heap, each library, etc. Therefore, it is not practical to randomize each object independently in 32-bit systems, without addressing the fragmentation issue.
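The figures above follow from the worst-case expression; a sketch with our own helper name (`max_guaranteed`), using a 3 GB user VM for i386 and 2^47 bytes for x86_64:

```c
/* Largest allocation guaranteed in the worst case: n single-page objects
 * evenly spread over vm_size bytes leave gaps of about vm_size/(n+1). */
unsigned long long max_guaranteed(unsigned long long vm_size,
                                  unsigned long long n) {
    return vm_size / (n + 1);
}
```

On x86_64, a 1 GB gap is guaranteed up to 2^17 - 1 mapped objects; on i386, 122 page-sized mappings already shrink the guarantee to about 25 MB.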
ASLR-NG addresses this issue by reserving a range of virtual space, the amount of which is specified as a percentage of the available VM size. When a requested object does not fit into the non-reserved space, the allocation algorithm automatically uses the reserved space, without degrading the entropy of these objects and regardless of their size.
Figure 15 shows the result of allocating multiple objects in ASLR-NG. Objects 1 and 3 fit into the non-reserved area, and so they are placed there, but for objects 2 and 4, there are no free gaps to hold them on the non-reserved area. In this case, the algorithm performs a top-down, first-fit strategy. Note that objects allocated in the reserved area will ‘inherit’ the entropy of the lowest object in the non-reserved area.
Although reserving a percentage of the VM will reduce the range for available randomization, ASLR-NG uses a novel strategy to regain lost entropy, whereby the reserved area is randomly placed at the top or the bottom of the virtual memory space. For example, by reserving 50%, an attacker cannot know on which side (top or bottom) the objects will be located, which forces them to consider the whole VM space. As a result, there is no entropy penalty with this strategy. Only when the reserved area is larger than 50%, is there a small amount of entropy degradation. The expression which relates the loss of entropy to the percentage of reserved area is:
\[
f(x) = \begin{cases} 0 & \text{if } x \le 50 \\ \log_2\left(\dfrac{50}{100 - x}\right) & \text{if } x > 50 \end{cases}
\]
where \( x \) is the percentage of the reserved area, and \( f(x) \) gives the number of bits that have to be subtracted from the total VM space entropy. For example, reserving 50% on an i386, the largest guaranteed object is 1.5 GB and entropy is not reduced. If 2/3 of the VM space is reserved, then it is possible to allocate an object up to 2 GB in size, at the cost of reducing entropy by only about 0.5 bits. Therefore, the ASLR-NG design has both more entropy and less fragmentation.
7.3. Algorithm
When a process is created, the area reserved to avoid fragmentation is defined by setting the variables `min_ASLR` and `max_ASLR`. This is the range that will be used to allocate objects (the allocation area).
The direction of a specific-zone is a random bit whose probability of pointing towards the middle of the allocation range grows with the distance of the base address from the middle (the expression is in Listing 1). In other words, if the base address is close to one border of the allocation range, then the direction is more likely to point toward the other side of the range. This way, objects will not accumulate at the borders of the allocation area.
**Listing 1.** ASLR-NG initialization pseudo-code.
```plaintext
new_zone(low, high, zone) {
    /* Random base anywhere in the allocation range */
    zone.base = randomize_range(low, high);
    /* Second uniform draw: the farther the base is from a border,
       the more likely the direction points away from that border */
    zone.direction = randomize_range(low, high) < zone.base ? TOPDOWN : DOWNTOP;
}

do_exec() {
    ...
    /* Size of the area reserved to fight fragmentation */
    reserved = VM_SIZE * percentage_reserved / 100;
    /* Place the reserved area randomly at the bottom or the top */
    min_ASLR = reserved * (rand() % 2);
    max_ASLR = min_ASLR + VM_SIZE - reserved;
    new_zone(min_ASLR, max_ASLR, mmap_base);
    new_zone(min_ASLR, max_ASLR, huge_pages);
    new_zone(min_ASLR, max_ASLR, thread_stacks);
    ...
}
```
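The direction rule above can be checked empirically: a second uniform draw compared against the base makes P(TOPDOWN) equal to the base's relative position in the range, so bases near a border almost always point away from it. A self-contained sketch using the C library `rand()` (a toy generator, not a kernel-grade random source; all names below are ours):

```c
#include <stdlib.h>

typedef enum { TOPDOWN, DOWNTOP } dir_t;

/* Toy stand-in for the kernel's uniform range generator. */
static unsigned long randomize_range(unsigned long low, unsigned long high) {
    return low + (unsigned long)rand() % (high - low);
}

/* Direction choice as in Listing 1. */
dir_t pick_direction(unsigned long low, unsigned long high, unsigned long base) {
    return randomize_range(low, high) < base ? TOPDOWN : DOWNTOP;
}

/* Monte Carlo estimate of P(TOPDOWN) for a given base address. */
double p_topdown(unsigned long low, unsigned long high, unsigned long base,
                 int trials) {
    int hits = 0;
    for (int i = 0; i < trials; i++)
        if (pick_direction(low, high, base) == TOPDOWN)
            hits++;
    return (double)hits / trials;
}
```

A base at 90% of the range points down (toward the middle) about 90% of the time, and a base at 10% does so only about 10% of the time, matching the stated behavior.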
A detailed analysis of the distribution of the objects at the borders of the allocation area is beyond the remit of this paper, but for now we can say that the presented algorithm to determine the direction gives a fair distribution along the whole range, with no accumulation areas (addresses with higher probability), regardless of the number of objects in the zone and the workload mix.
The algorithm employed to allocate an object works by first selecting a hint address around which to place the object, and then looking for a free gap in which to actually place it. The algorithm is as follows:
1. Obtain the hint address and the direction:
- if the request targets a specific-zone, then the hint address and the direction are those of the specific-zone.
- if it is an isolated object, then the hint address is a random value from the allocation range [`min_ASLR`, `max_ASLR`] and the direction is top-down.
2. Look for a gap large enough to hold the request from the hint address to the limit of the allocation area determined by the direction. If found, then succeed.
3. Look for a gap large enough to hold the request from the hint address to the limit of the allocation area determined by the reverse direction. If found, then succeed.
4. Look for a gap large enough to hold the request from the full VM space, starting from the allocation area and working towards the reserved area. If found, then succeed.
5. Out of memory error.
Even if there is no reserved area, step 4 is necessary to guarantee that the whole virtual memory is covered properly. For example, as illustrated in Figure 16d, the gaps [ld.so ↔ mmap_base] and [mmap_base ↔ vDSO] are not suitable for a large request, but the gap [ld.so ↔ vDSO] can be used if a global search is done.
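Steps 1 to 5 can be sketched over a toy page bitmap; everything below (`find_gap`, `alloc_pages`, the 64-page space) is our own simplification, not kernel code:

```c
#include <string.h>

#define PAGES 64
static unsigned char used[PAGES]; /* toy virtual space, one byte per page */

/* Look for `len` consecutive free pages inside [lo, hi). If topdown,
 * candidate starts are tried from hi - len down to lo; otherwise from
 * lo up to hi - len. Returns the start page, or -1 if no gap fits. */
int find_gap(int lo, int hi, int len, int topdown) {
    int start = topdown ? hi - len : lo;
    int step = topdown ? -1 : 1;
    for (; start >= lo && start + len <= hi; start += step) {
        int free_run = 1;
        for (int i = 0; i < len; i++)
            if (used[start + i]) { free_run = 0; break; }
        if (free_run)
            return start;
    }
    return -1;
}

/* Steps 1-5 of the text, with [lo, hi) as the allocation area inside
 * the full [0, PAGES) virtual space. */
int alloc_pages(int lo, int hi, int hint, int len, int topdown) {
    /* 1-2: from the hint toward the area limit given by the direction */
    int s = topdown ? find_gap(lo, hint, len, 1) : find_gap(hint, hi, len, 0);
    /* 3: retry toward the other limit */
    if (s < 0)
        s = topdown ? find_gap(hint, hi, len, 0) : find_gap(lo, hint, len, 1);
    /* 4: global search over the whole virtual space */
    if (s < 0)
        s = find_gap(0, PAGES, len, topdown);
    if (s < 0)
        return -1; /* 5: out of memory */
    memset(used + s, 1, (size_t)len);
    return s;
}
```

Requests that fit below the hint are placed there top-down; a request too large for the allocation area falls through to the global search of step 4, and only then fails.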
Figure 16. ASLR-NG: Profile mode examples.
7.4. Profile Modes
The ASLR-NG design provides two possibilities for allocating each object: isolated or in a specific-zone. From a security point of view, the more isolated objects, the better, but there are multiple side effects that should be carefully considered and balanced, as described in Section 6. In order to simplify the configuration of ASLR-NG, we provide four different working modes or profiles. Each mode randomizes each object using the isolated or the specific-zone method. The four modes are summarized in Table 3, and a representative example of each one is sketched in Figure 16. Next is the design rationale for each mode:
**Mode 1—Concentrated:** all objects are allocated in a single specific-zone, which results in a compact layout. The number of entropy bits of each object is not degraded; only the correlation entropy between them is. In other words, the cost (if brute force were used) of obtaining the address of an object is not reduced by using this mode. The goal is to reduce the footprint of the page table.
**Mode 2—Conservative:** this mode is equivalent to that used in Linux and PaX. The main stack, the executable and the heap are independently randomized, while the rest (libraries and mmaps) are allocated in the mmap specific-zone. Since the objects are randomized using the full allocation range, ordering is not preserved; for example, the stack may be below the executable.
**Mode 3—Extended:** this is an extension of the conservative mode, with additional randomization forms: (1) specific-zones for sensitive objects (thread stacks, heap, huge pages, read-write-exec and executable-only objects); (2) sub-page randomization of the heap and thread stacks and (3) per-fork randomization. This can be considered a very secure configuration mode, which addresses most of the weaknesses and sets a reasonable balance between security and performance. Therefore, this should be the default mode on most systems.
**Mode 4—Paranoid:** every object is independently randomized, and no specific-zones are used. As a result, there is no correlation between any objects, which could even prevent future sophisticated attacks. It is intended to be used on processes that are highly exposed, for example networking servers, but should be carefully used when applied globally to all system processes because of additional memory overheads.
**Table 3. ASLR-NG mode definition.**
<table>
<thead>
<tr>
<th>Feature</th>
<th>Modes</th>
</tr>
</thead>
<tbody>
<tr>
<td>Sub-page in ARGV</td>
<td>✓</td>
</tr>
<tr>
<td>Randomize direction</td>
<td>✓</td>
</tr>
<tr>
<td>Bit-slicing</td>
<td>✓</td>
</tr>
<tr>
<td>Isolate stack, executable and heap</td>
<td>✓</td>
</tr>
<tr>
<td>Specific-zone for huge pages</td>
<td>✓</td>
</tr>
<tr>
<td>Randomize specific-zones per child</td>
<td>✓</td>
</tr>
<tr>
<td>Sub-page in heap and thread stacks</td>
<td>✓</td>
</tr>
<tr>
<td>Specific-zone for thread stacks</td>
<td>✓</td>
</tr>
<tr>
<td>Specific-zone for read-write-exec objects</td>
<td>✓</td>
</tr>
<tr>
<td>Specific-zone for exec objects</td>
<td>✓</td>
</tr>
<tr>
<td>Isolate thread stacks</td>
<td>✓</td>
</tr>
<tr>
<td>Isolate LD and vDSO</td>
<td>✓</td>
</tr>
<tr>
<td>Isolate all objects</td>
<td>✓</td>
</tr>
</tbody>
</table>
7.5. Fine-Grain Configuration
Each profile mode is defined by a set of features. Table 3 lists the ASLR-NG configuration options enabled on each mode.
- **Sub-page in ARGV:** ASLR-NG randomizes all the sub-page alignment bits. Although the arguments/environment are in the stack area, the sub-page alignment bits of ARGV can be randomized independently.
- **Randomize direction:** the direction of a specific-zone is re-randomized for every new allocation. As a result, even libraries that typically are loaded sequentially will have some degree of randomness, which is especially useful in the concentrated profile, because it shuffles objects.
- **Specific-zone for huge pages:** if enabled, ASLR-NG uses a different specific-zone to map huge pages; huge pages are therefore completely isolated, and correlation attacks abusing their low entropy are no longer possible.
- **Specific-zone for thread stacks:** if enabled, thread stacks are allocated in a designated specific-zone. This not only prevents correlation attacks but also separates their data content from the libraries, since both are by default in the same area.
- **Isolate stack, executable and heap:** each of these objects is independently randomized, which is the default behavior in Linux and PaX. This option was added so that the concentrated mode can disable it.
- **Randomize specific-zones per child:** when a new child is spawned, all specific-zones are renewed, which results in a different memory map between the parent and the child, as well as among any siblings.
- **Sub-page in heap and thread stacks:** applies sub-page randomization to the thread stacks and the heap. This feature can also be used from user-land on a per-object basis, by calling `mmap()` with the new flag `MAP_INTRA_PAGE`.
- **Isolate thread stacks:** randomizes thread stacks individually. This feature can also be requested by using the `MAP_RND_OBJECT` flag when calling `mmap()`.
- **Isolate LD and vDSO:** by enabling this feature, ASLR-NG loads these objects individually instead of using the classic library/mmap zone.
- **Bit-slicing:** when this feature is enabled, ASLR-NG generates a random number at boot time, which is later used to improve the entropy of objects that must be aligned, typically for cache-aliasing performance. Instead of setting the sensitive bits to zero, they are set to the random value generated at boot. We have used the core idea of this novel randomization form to address a security issue in the Linux kernel 4.1, increasing entropy by 3 bits on the AMD Bulldozer processor family [31].
- **Isolate all objects:** all objects are independently randomized. The leakage of any object cannot be used to de-randomize any other. This feature can be used in very exposed or critical environments where security is paramount.
8. Evaluation
This section compares ASLR-NG with Linux, PaX and OS X. First, Section 8.1 compares the main randomization forms to identify the new features introduced by ASLR-NG. Section 8.2 then compares the entropy bits for 32- and 64-bit x86 architectures, and finally Section 8.3 presents the correlation entropy of the objects.
8.1. Randomization Forms
Linux and PaX provide very few randomization forms, and furthermore they do not generalize them. For example, they do not provide sub-page or inter-object randomization for thread stacks. ASLR-NG extends already-used forms of entropy to most objects and provides new forms to prevent correlation attacks [27]. It is worth mentioning the concept of specific-zones, a simple mechanism employed to group sensitive objects together and isolate them from the rest. Table 4 summarizes the main features of Linux, PaX and ASLR-NG.
<table>
<thead>
<tr>
<th>Feature and Forms</th>
<th>OS X</th>
<th>Linux</th>
<th>PaX</th>
<th>ASLR-NG</th>
</tr>
</thead>
<tbody>
<tr>
<td>ASLR per-exec</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Inter-object in stack, exec. and heap</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Sub-page in main stack</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Sub-page in ARGV and heap (brk)</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Inter-object in LD and vDSO</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Inter-object in thread stacks</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Sub-page in thread stacks</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Load libraries order randomized</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Multiple specific-zone support</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Randomize specific-zones per child</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Bit-slicing randomization</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Sub-page per mmap request</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Inter-object per mmap request</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Uniform distribution</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Full VM range</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
</tr>
</tbody>
</table>
8.2. Absolute Address Entropy
Absolute entropy is the effective entropy of an object when it is considered independently. Each ASLR implementation has been tested on two different systems:
- **32-bit:** a 32-bit x86 architecture, without PAE. Note that when an i386 application is executed on an x86_64 system, the memory layout is different. Our experiments are executed on a truly 32-bit system, and so the virtual memory space available to any process is 3 GB.
- **64-bit:** a 64-bit x86_64 architecture. The virtual memory space available to the user is $2^{47}$ bytes.
Table 5 shows the entropy bits measured in Linux, PaX, OS X and ASLR-NG on both 32- and 64-bit systems. Note that the ASLR-NG absolute entropy is the same for all modes. All the data presented in this section are the result of running the sampler tool to collect a million samples per system, which is more than enough for the virtual memory space. ARGV is the page in memory that holds the program arguments.
Table 5. Comparative summary of bits of entropy.
<table>
<thead>
<tr>
<th>Object</th>
<th>32-Bits</th>
<th>64-Bits</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>OS X</td>
<td>Linux</td>
</tr>
<tr>
<td>ARGV</td>
<td>8</td>
<td>11</td>
</tr>
<tr>
<td>Main stack</td>
<td>8</td>
<td>19</td>
</tr>
<tr>
<td>Heap (brk)</td>
<td>8.7</td>
<td>13</td>
</tr>
<tr>
<td>Heap (mmap)</td>
<td>7.7</td>
<td>8</td>
</tr>
<tr>
<td>Thread stacks</td>
<td>11</td>
<td>8</td>
</tr>
<tr>
<td>Sub-page object</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>Regular mmaps</td>
<td>7.7</td>
<td>8</td>
</tr>
<tr>
<td>Libraries</td>
<td>7.7</td>
<td>8</td>
</tr>
<tr>
<td>vDSO</td>
<td>7.7</td>
<td>8</td>
</tr>
<tr>
<td>Executable</td>
<td>8</td>
<td>8</td>
</tr>
<tr>
<td>Huge pages</td>
<td>0</td>
<td>0</td>
</tr>
</tbody>
</table>
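The entropies above are measured empirically from address samples. As an illustration (a simplified sketch, not the paper's ASLRA or sampler tool), the absolute entropy of an object can be upper-bounded by counting which address bits are ever observed to change across samples:

```python
import random

def estimated_entropy_bits(addresses):
    """Upper-bound the entropy of an object as the number of address
    bits observed to vary across the collected samples."""
    base = addresses[0]
    varying = 0
    for a in addresses[1:]:
        varying |= a ^ base
    return bin(varying).count("1")

# Simulate an mmap-like allocator: 28 random bits above a 12-bit page offset.
rng = random.Random(42)
samples = [rng.getrandbits(28) << 12 for _ in range(100_000)]
print(estimated_entropy_bits(samples))  # 28: all randomized bits vary
```

A real measurement must also check that the distribution over those bits is uniform; as noted for PaX above, non-uniform distributions yield fractional effective entropy.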
**Linux:** In 32-bit systems, Linux provides only 8 random bits for most objects, which is too low a value to be effective and can be considered defeated. In 64 bits, although randomization is higher for most objects, there are still some objects (vDSO and ARGV) with much lower entropy, which in turn may encourage attackers to use them.
Huge pages are less randomized, due to alignment constraints. In particular, in 32-bit systems, alignment resets those bits that ASLR randomizes, and so huge pages are not randomized at all. Moreover, in 64-bit systems, huge pages have 19 random bits, which gives some protection but still may not deter local or remotely distributed attackers.
**PaX/Grsecurity:** In 32 bits, PaX provides much more entropy than Linux for all objects. The libraries and mmapped objects have 15.72 bits of entropy, in which case a brute-force attack at 100 trials per second needs a few minutes to bypass the PaX ASLR. The least randomized object (apart from huge pages) is the executable. Surprisingly, in 64-bit systems the entropy of executables in PaX is lower than in Linux. The additional entropy bits of the ARGV, main stack and heap are due to sub-page randomization. The decimal values of the mmapped objects are caused by a non-uniform distribution, as explained in Section 5.2. PaX is much better than Linux in 32 bits, but quite similar in 64 bits.
**OS X:** ASLR is broken in OS X for local attackers in both 32- and 64-bit systems, and is weak against remote attackers. As Table 4 shows, OS X implements its ASLR per boot. That is, all objects except the executable are only re-randomized after the system is rebooted. Therefore it provides no protection against local attackers, and its ASLR in both 32- and 64-bit systems must be considered broken for them. The OS X entropy shown in Table 5 only applies to remote attackers. In 32-bit systems it provides similar protection to Linux, but in 64-bit systems the entropy provided is very low, allowing potential attackers to bypass the OS X ASLR with little effort.
Note that because ASLR is applied only when the system is rebooted, remote attackers can mount a brute-force attack against the libraries without requiring a forking server, since the libraries will always be mapped at the same addresses. On average, remotely bypassing the ASLR in 64-bit OS X systems requires $2^{15} = 32{,}768$ attempts (half of the $2^{16}$ search space), which is clearly not enough to deter attackers.
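As a back-of-the-envelope check of these figures (a sketch of the standard expected-cost argument, not taken from the paper), the average number of brute-force guesses for $n$ bits of entropy is half the $2^n$ search space:

```python
def avg_bruteforce_attempts(entropy_bits):
    """Expected number of guesses to find one address among 2^n equally
    likely candidates: half the search space on average."""
    return 2 ** (entropy_bits - 1)

print(avg_bruteforce_attempts(16))  # 32768 attempts on average
# At 100 trials per second, 28 bits of entropy already cost ~15.5 days:
print(avg_bruteforce_attempts(28) / 100 / 86400)
```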
**ASLR-NG:** In 32 bits, libraries and mapped objects have almost 20 bits of entropy, which is comparable to the least randomized objects in 64-bit Linux (vDSO and ARGV). Because of the small VM space in 32 bits, the entropy is intrinsically limited, but thanks to the ability of ASLR-NG to use the full address range to allocate any object, it increases entropy by up to 20 more bits than Linux and 12 more than PaX. Although ASLR-NG provides the highest randomization for huge pages, the alignment constraint (which resets the lowest 22 bits) only leaves the possibility of randomizing the highest 10 bits.
In 64 bits, ASLR-NG provides up to 15 more bits than Linux and 14 more than PaX. Regarding huge pages, Linux and PaX have 1 million possible places to load them, compared with 67 million for ASLR-NG. This increase in entropy, jointly with the specific-zone for huge pages, raises the cost for an attacker to guess where they are placed and at the same time prevents using them in correlation attacks. Hence, ASLR-NG outperforms the Linux and PaX ASLR in both 32- and 64-bit systems.
8.3. Correlation in ASLR-NG
ASLR-NG addresses correlation weakness by randomizing objects and specific-zones independently. Obviously, all the objects allocated in the same specific-zone are correlated together, but they are uncorrelated in relation to other specific-zones or objects.
The concentrated mode, by definition, is fully correlated to provide a compact layout to systems with low resources. The conservative mode is close to Linux and PaX but prevents using the stack, executable and heap in correlation attacks.
In extended mode, ASLR-NG extends the conservative mode with five specific-zones to isolate objects of different criticality levels. The paranoid mode goes a step further by removing the correlation between all pairs of objects (no specific-zones are created); however, as far as we know, exploiting the correlation between objects in the same category is not useful in practice. Typically, a single library contains enough gadgets to build ROP exploits, and so it is not necessary to de-randomize other libraries.
9. Conclusions
In this paper we have analyzed the major operating system ASLR implementations to assess their effectiveness and weaknesses from the local and remote attacker's point of view, including the impact on IoT devices based on Linux and OS X.
To perform the assessment, we have proposed a taxonomy of all ASLR elements, creating a categorization along three entropy dimensions. Based on this taxonomy, we have created ASLRA, an advanced statistical analysis tool to automatically assess the effectiveness of any ASLR implementation.
Our analysis showed that all ASLR implementations suffer from several weaknesses: 32-bit systems provide poor ASLR, and OS X has a broken ASLR in both 32- and 64-bit systems. This jeopardizes not only servers and end-user devices such as smartphones but also the growing IoT ecosystem. We have then presented ASLR-NG, a novel ASLR that provides the maximum possible absolute entropy and removes all correlation attacks, making it the best solution for both 32- and 64-bit systems. We implemented ASLR-NG in the Linux kernel 4.15, showing that it outperforms the PaX, Linux and OS X implementations and provides strong protection against attackers abusing weak ASLRs.
Author Contributions: Writing—Original draft, H.M.-G. and I.R.R.
Funding: This research received no external funding.
Conflicts of Interest: The authors declare no conflict of interest.
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
On the Complexity of Mapping Linear Chain Applications onto Heterogeneous Platforms
Anne Benoit, Yves Robert, Eric Thierry
HAL Id: hal-02102806
https://hal-lara.archives-ouvertes.fr/hal-02102806
Submitted on 17 Apr 2019
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
LIP, ENS Lyon, 46 Allée d’Italie, 69364 Lyon Cedex 07, France
UMR 5668 - Université de Lyon - CNRS - ENS Lyon - UCB Lyon - INRIA
October 2008
Abstract
In this paper, we explore the problem of mapping simple application patterns onto large-scale heterogeneous platforms. An important optimization criterion that should be considered in such a framework is the latency, or makespan, which measures the response time of the system to process one single data set entirely. In this work we focus on linear chain applications, which are representative of a broad class of real-life applications. For such applications, we can consider one-to-one mappings, in which each stage is mapped onto a single processor. However, in order to reduce the communication cost, it seems natural to group stages into intervals. The interval mapping problem can be solved in a straightforward way if the platform has homogeneous communications: the whole chain is grouped into a single interval, which in turn is mapped onto the fastest processor. But the problem becomes harder when considering a fully heterogeneous platform. Indeed, we prove the NP-completeness of this problem. Furthermore, we prove that neither the interval mapping problem nor the similar one-to-one mapping problem can be approximated by any constant factor (unless P=NP).
Key words: pipeline graphs, interval mappings, latency, makespan, complexity results, NP-hardness, approximation.
1 Introduction
Mapping applications onto parallel platforms is a difficult challenge. Several scheduling and load-balancing techniques have been developed for homogeneous architectures (see [8] for a survey) but the advent of heterogeneous clusters has rendered the mapping problem even more difficult.
In this context, a structured programming approach rules out many of the problems with which the low-level parallel application developer is usually confronted, such as deadlocks or process starvation. Moreover, many real applications draw from a range of well-known solution paradigms, such as pipelined or farmed computations. High-level approaches based on algorithmic skeletons [4, 6] identify such patterns and seek to make it easy for an application developer to tailor such a paradigm to a specific problem. A library of skeletons is provided to the programmer, who can rely on these already-coded patterns to express the communication scheme within their own application. Moreover, the use of a particular skeleton carries with it considerable information about implied scheduling dependencies, which we believe can help address the complex problem of mapping a distributed application onto a heterogeneous platform.
In this paper, we consider applications that can be expressed as pipeline graphs. Typical applications include digital image processing, where images have to be processed in steady-state mode. A well known pipeline application of this type is for example JPEG encoding (http://www.jpeg.org/). In such workflow applications, a series of data sets (tasks) enter the input stage and progress from stage to stage until the final result is computed. Each stage has its own communication and computation requirements: it reads an input file from the previous stage, processes the data and outputs a result to the next stage. For each data set, initial data is input to the first stage, and final results are output from the last stage. One of the key metrics for such applications is the latency, i.e., the time elapsed between the beginning and the end of the execution of a given data set. Hence it measures the response time of the system to process the data set entirely.
Due to possible local memory accesses, the rule of the game is always to map a given stage onto a single processor: we cannot process half of the tasks on one processor and the remaining tasks on another without exchanging intra-stage information, which might be costly and difficult to implement. In other words, a processor that is assigned a set of stages will execute the operations required by these stages (input, computation and output) for all the tasks fed into the pipeline. Communications are paid only if two consecutive stages are not mapped onto the same processor, and the latency is the sum of computation and communication costs over the whole pipeline.
The optimization problem can be stated informally as follows: which stage should be assigned to which processor in order to minimize the latency? There are several mapping strategies. The most restrictive mappings are one-to-one; in this case, each stage is assigned a different processor. The allocation function which assigns a processor to each stage is then a one-to-one function, and there must be at least as many processors as application stages. Another strategy is very common for linear chains: we may decide to group consecutive stages onto a same processor, in order to avoid some costly communications. In this case, each participating processor processes an interval of consecutive stages. Such a mapping is called an interval mapping. Finally, we can consider general mappings, for which there is no constraint on the allocation function: each processor is assigned one or several stage intervals.
The problem of mapping pipeline skeletons onto parallel platforms has received some attention. In particular, Subhlok and Vondran [9, 10] have dealt with this problem on homogeneous platforms. In this paper, we extend their work and target heterogeneous platforms. Our main goal is to assess the additional complexity induced by the heterogeneity of processors and of communication links.
The rest of the paper is organized as follows. Section 2 is devoted to the presentation of the target optimization problems. Next in Section 3 we proceed to the complexity results, and in particular we prove the NP-completeness of the interval mapping problem. Furthermore, we prove that both this problem and the one-to-one mapping problem cannot be approximated by any constant factor (unless P=NP). Finally, we state some concluding remarks in Section 4.
2 Framework
2.1 Application Model
The application is expressed as a pipeline graph of $n$ stages $S_k$, $1 \leq k \leq n$, as illustrated on Figure 1. Consecutive data sets are fed into the pipeline and processed from stage to stage, until they exit the pipeline after the last stage. Each stage executes a task. More precisely, the $k$-th stage $S_k$ receives an input from the previous stage, of size $\delta_{k-1}$, performs a number of $w_k$ computations, and outputs data of size $\delta_k$ to the next stage. This operation corresponds to the $k$-th task and is repeated periodically on each data set. The first stage $S_1$ receives an input of size $\delta_0$ from the outside world, while the last stage $S_n$ returns the result, of size $\delta_n$, to the outside world.

2.2 Platform Model
We target a platform (see Figure 2) with $p$ processors $P_u$, $1 \leq u \leq p$, fully interconnected as a (virtual) clique. There is a bidirectional link $\text{link}_{u,v} : P_u \rightarrow P_v$ between any processor pair $P_u$ and $P_v$, of bandwidth $b_{u,v}$. Note that we do not need to have a physical link between any processor pair. Instead, we may have a switch, or even a path composed of several physical links, to interconnect $P_u$ and $P_v$; in the latter case we would retain the bandwidth of the slowest link in the path for the value of $b_{u,v}$. The speed of processor $P_u$ is denoted as $s_u$, and it takes $X/s_u$ time-units for $P_u$ to execute $X$ floating point operations. We also enforce a linear cost model for communications, hence it takes $X/b_{u,v}$ time-units to send (or receive) a message of size $X$ from $P_u$ to $P_v$.

Communication contention is taken care of by enforcing the one-port model [2, 3]. In this model, a given processor can be involved in a single communication at any time-step, either a send or a receive. However, independent communications between distinct processor pairs can take place simultaneously. The one-port model seems to fit the performance of some current MPI implementations, which serialize asynchronous MPI sends as soon as message sizes exceed a few megabytes [7].
Finally, we assume that two special additional processors \(P_{in}\) and \(P_{out}\) are devoted to input/output data. Initially, the input data for each task resides on \(P_{in}\), while all results must be returned to and stored in \(P_{out}\).
2.3 Mapping Problem
The general mapping problem consists in assigning application stages to platform processors. For simplicity, we could assume that each stage \(S_i\) of the application pipeline is mapped onto a distinct processor (which is possible only if \(n \leq p\)). However, such one-to-one mappings may be unduly restrictive, and a natural extension is to search for interval mappings, i.e., allocation functions where each participating processor is assigned an interval of consecutive stages. Intuitively, assigning several consecutive tasks to the same processor will increase its computational load, but may well dramatically decrease communication requirements. In fact, the best interval mapping may turn out to be a one-to-one mapping, or instead may enroll only a very small number of fast computing processors interconnected by high-speed links. Interval mappings constitute a natural and useful generalization of one-to-one mappings (not to speak of situations where \(p < n\), where interval mappings are mandatory), and such mappings have been studied by Subhlok and Vondran [9, 10].
Formally, we search for a partition of \([1..n]\) into \(m \leq p\) intervals \(I_j = [d_j, e_j]\) such that \(d_j \leq e_j\) for \(1 \leq j \leq m\), \(d_1 = 1\), \(d_{j+1} = e_j + 1\) for \(1 \leq j \leq m - 1\) and \(e_m = n\). The allocation function \(a(j)\) returns the index of the processor on which interval \(I_j\) is mapped. The optimization problem is to determine the mapping with the minimum latency, over all possible partitions into intervals, and over all processor assignments.
We assume that \(a(0) = in\) and \(a(m+1) = out\), where \(P_{in}\) is a special processor holding the initial data, and \(P_{out}\) is receiving the results. The latency is then obtained by summing all communication and computation costs for a given allocation of intervals to processors:
\[
\mathcal{L} = \frac{\delta_0}{b_{in,a(1)}} + \sum_{1 \leq j \leq m} \left\{ \frac{\sum_{i=d_j}^{e_j} w_i}{s_{a(j)}} + \frac{\delta_{e_j}}{b_{a(j),a(j+1)}} \right\}
\]
With no constraint on the allocation function, the mapping is general (or arbitrary): a processor is allowed to process several non-consecutive intervals. In order to restrict to interval mappings, we require the allocation function to be one-to-one, so that all intervals are mapped on distinct processors:
\[
\forall 1 \leq j_1, j_2 \leq m \quad a(j_1) = a(j_2) \implies j_1 = j_2
\]
For one-to-one mappings, we add the additional constraint that each interval is reduced to exactly one stage \((d_j = e_j\) for \(1 \leq j \leq m\)).
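For small instances, the definitions above can be made concrete by brute force: enumerate all interval partitions and all one-to-one processor assignments, and evaluate the latency of Equation (1) for each. The following sketch does exactly that (the instance at the end is a made-up toy; exhaustive search is only feasible for tiny \(n\) and \(p\)):

```python
from itertools import combinations, permutations

def best_interval_mapping(w, delta, s, b, b_in, b_out):
    """Exhaustive search over interval partitions and one-to-one processor
    assignments, evaluating Equation (1) for each candidate mapping.
    w[i]: work of stage i (0-indexed); delta[i]: input size of stage i,
    delta[n] the final output; b[v][u]: bandwidth between processors."""
    n, p = len(w), len(s)
    best_lat, best_map = float("inf"), None
    for m in range(1, min(n, p) + 1):
        for cuts in combinations(range(1, n), m - 1):
            bounds = [0, *cuts, n]  # interval j covers stages bounds[j]..bounds[j+1]-1
            for alloc in permutations(range(p), m):
                lat = delta[0] / b_in[alloc[0]]
                for j in range(m):
                    d, e = bounds[j], bounds[j + 1]
                    lat += sum(w[d:e]) / s[alloc[j]]
                    out_bw = b[alloc[j]][alloc[j + 1]] if j + 1 < m else b_out[alloc[j]]
                    lat += delta[e] / out_bw
                if lat < best_lat:
                    best_lat, best_map = lat, (bounds, alloc)
    return best_lat, best_map

# Toy instance: 3 stages, 2 processors (one twice as fast), all links bw 10.
lat, (bounds, alloc) = best_interval_mapping(
    w=[1, 1, 1], delta=[10, 5, 5, 10], s=[1, 2],
    b=[[0, 10], [10, 0]], b_in=[10, 10], b_out=[10, 10])
print(lat, bounds, alloc)  # optimum: whole chain on the fast processor
```

On this toy instance the optimum groups the whole chain into a single interval on the fast processor, which is exactly the homogeneous-communication behavior discussed in Section 3.1.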
3 Complexity Results
In this section, we summarize complexity results for the latency minimization problem, with the different mapping rules. We start by recalling results for platforms with homogeneous communication links (but different-speed processors), before tackling fully heterogeneous platforms.
3.1 Homogeneous Communications
When the target platform has identical communication links, i.e., \( b_{u,v} = b \) for \( 1 \leq u, v \leq p \), then latency can be minimized by grouping all application stages into a single interval, and mapping this interval onto the fastest processor, \( P_u \). This is clearly an optimal mapping:
- according to Equation (1), all computation costs are minimized thanks to the use of \( P_u \);
- the only communications are \( \delta_0 / b \) and \( \delta_n / b \), and these are included in any mapping.
For one-to-one mappings, all communication costs are always included, with a total cost \( \sum_{i=0}^{n} \delta_i / b \). Thus we need to minimize the total computation cost. This can be done greedily by assigning the stage with the largest computation cost to the fastest processor, and so on. A simple exchange argument proves the optimality of this mapping. Thus we have the following theorem:
**Theorem 1.** Minimizing the latency is polynomial on communication homogeneous platforms for one-to-one, interval and general mappings.
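The greedy argument for one-to-one mappings can be sketched directly (a minimal illustration with made-up costs; pairing sorted stage costs with sorted processor speeds realizes the exchange argument above):

```python
def greedy_one_to_one_latency(w, s, delta, b):
    """Greedy optimal one-to-one mapping on a communication-homogeneous
    platform: pair the costliest stage with the fastest processor, and so
    on. All n+1 communications are paid regardless of the assignment."""
    comp = sum(wi / si for wi, si in zip(sorted(w, reverse=True),
                                         sorted(s, reverse=True)))
    comm = sum(delta) / b
    return comp + comm

# Two stages (costs 4 and 1), two processors (speeds 2 and 1), bandwidth 1.
print(greedy_one_to_one_latency([4, 1], [2, 1], [1, 1, 1], 1))  # 6.0
```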
However, this line of reasoning does not hold anymore when communications become heterogeneous: indeed, the communication cost can hugely differ from one mapping to another, as shown in the next Section.
3.2 Heterogeneous Platforms
3.2.1 Motivating Example
Consider the problem of mapping the pipeline of Figure 3(a) on the heterogeneous platform of Figure 3(b). The pipeline consists of two stages, both needing the same amount of computation (**w** = 2) and the same amount of communication (**δ** = 100). In this example, a mapping which minimizes the latency must map each stage on a different processor, thus splitting the stages into two intervals. In fact, if we map the whole pipeline on a single processor, we achieve a latency of 100/100 + (2 + 2)/1 + 100/1 = 105, whether we choose \( P_1 \) or \( P_2 \) as the target processor. Splitting the pipeline, and hence mapping the first stage on \( P_1 \) and the second stage on \( P_2 \), requires paying the communication between \( P_1 \) and \( P_2 \) but drastically decreases the latency: 100/100 + 2/1 + 100/100 + 2/1 + 100/100 = 1 + 2 + 1 + 2 + 1 = 7. This little example explains why minimizing the latency can no longer be achieved by mapping all stages onto the fastest resource.
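The arithmetic above can be checked mechanically. The sketch below evaluates the latency formula of Section 2.3 on this example; since Figure 3(b) is not reproduced here, the link bandwidths used are inferred from the latencies computed in the text:

```python
def latency(intervals, alloc, w, delta, s, b):
    """Evaluate the latency of Equation (1) for an interval mapping.
    intervals: (d_j, e_j) pairs of 1-indexed stage bounds; alloc: processor
    name per interval; b: bandwidth dict keyed by (source, destination)."""
    total = delta[0] / b[("in", alloc[0])]
    for j, (d, e) in enumerate(intervals):
        total += sum(w[d - 1:e]) / s[alloc[j]]
        nxt = alloc[j + 1] if j + 1 < len(intervals) else "out"
        total += delta[e] / b[(alloc[j], nxt)]
    return total

# Two-stage example: w = 2 per stage, all deltas 100, unit-speed processors.
w, delta = [2, 2], [100, 100, 100]
s = {"P1": 1, "P2": 1}
b = {("in", "P1"): 100, ("P1", "P2"): 100, ("P2", "out"): 100,
     ("in", "P2"): 1, ("P1", "out"): 1}
print(latency([(1, 2)], ["P1"], w, delta, s, b))                # 105.0
print(latency([(1, 1), (2, 2)], ["P1", "P2"], w, delta, s, b))  # 7.0
```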
3.2.2 About One-to-One and General Mappings
**Theorem 2.** Minimizing the latency is NP-hard on heterogeneous platforms for one-to-one mappings.
This result is obtained by a reduction from the Traveling Salesman Problem (TSP), which is NP-complete [5]. The proof can be found in [1].
**Theorem 3.** Minimizing the latency is polynomial on heterogeneous platforms for general mappings.
General mappings are less constrained since several stages can be arbitrarily grouped onto a same processor. In this case, the problem can be reduced to finding a shortest path in a graph, or solved by using dynamic programming. The proof using shortest paths can be found in [1], and we provide here a dynamic program to solve this problem.
Let \(L(i, u)\) be the minimum latency that can be achieved to process stages \(S_1\) to \(S_i\), when stage \(S_i\) is mapped onto processor \(P_u\). Then the recurrence can be written as:
\[
L(i, u) = \min \left\{ \begin{array}{ll}
L(i - 1, u) + \frac{w_i}{s_u} & (1) \\[6pt]
\min\limits_{v \neq u} \left\{ L(i - 1, v) + \frac{\delta_{i-1}}{b_{v,u}} + \frac{w_i}{s_u} \right\} & (2)
\end{array} \right.
\]
Line (1) corresponds to the case in which stage \(i - 1\) is mapped on the same processor as stage \(i\), thus only the computation cost of \(S_i\) is added to the latency. Line (2) tries all other processors \(P_v \neq P_u\) and adds to the latency the corresponding communication cost (plus the computation cost for \(S_i\)).
The initialization must ensure that the correct communication cost will be paid as an input to \(S_1\). This can be done by forcing the virtual stage \(S_0\) to be mapped onto \(P_{in}\):
\[
L(0, u) = \begin{cases}
0 & \text{if } P_u = P_{in} \\
+\infty & \text{otherwise}
\end{cases}
\]
Finally, we need to compute
\[
\min_{1 \leq u \leq p} \left( L(n, u) + \frac{\delta_n}{b_{u,\text{out}}} \right)
\]
The complexity of this algorithm is \(O(n \cdot p^2)\). The resulting mapping is general because nothing prevents mapping non-consecutive stages onto the same processor. If we look for an interval mapping, we could modify the algorithm to keep track of the processors already used, for instance by marking processor \(P_u\) as “used” when switching to processor \(P_v\) in line (2) of the recurrence. However, such information cannot be maintained without making the algorithm exponential. Indeed, we prove below that the interval mapping problem is NP-hard.
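A minimal Python sketch of this dynamic program follows (the names, and the handling of \(P_{in}\)/\(P_{out}\) through separate bandwidth vectors, are our own choices):

```python
def min_latency(w, delta, s, b, b_in, b_out):
    """Latency-optimal general mapping by dynamic programming.

    w[i]     : computation cost of stage S_{i+1} (i = 0..n-1)
    delta[i] : communication cost delta_i (i = 0..n)
    s[u]     : speed of processor P_u (u = 0..p-1)
    b[v][u]  : bandwidth of the link between P_v and P_u
    b_in[u]  : bandwidth from the input processor P_in to P_u
    b_out[u] : bandwidth from P_u to the output processor P_out
    """
    n, p = len(w), len(s)
    # L[u] = L(1, u): stage S_1 on P_u, paying the input communication delta_0.
    L = [delta[0] / b_in[u] + w[0] / s[u] for u in range(p)]
    for i in range(1, n):
        L = [min(
                L[u],                                  # line (1): stay on P_u
                min((L[v] + delta[i] / b[v][u]         # line (2): come from P_v
                     for v in range(p) if v != u),
                    default=float("inf")),
             ) + w[i] / s[u]
             for u in range(p)]
    # Pay the output communication delta_n.
    return min(L[u] + delta[n] / b_out[u] for u in range(p))
```

On the motivating example of Section 3.2.1 (two stages of cost 2, all communication costs equal to 100, two unit-speed processors with fast links P_in–P_1, P_1–P_2, P_2–P_out), this returns the latency 7 computed in the text.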
3.2.3 NP-Completeness for Interval Mappings
**Definition 1.** LATENCY-INT-HET-DEC—Given a pipeline application, a target platform and a bound \( L \), does there exist an interval mapping of the pipeline on the platform with a latency which does not exceed \( L \)?
**Theorem 4.** LATENCY-INT-HET-DEC is NP-complete.
**Proof.** The problem clearly belongs to NP: given a mapping, we can check that constraint (2) is fulfilled (hence we do have an interval mapping), compute its latency using Equation (1), and then check that the bound \( L \) is respected, all of this in polynomial time.
To prove the completeness of the problem, we use a reduction from DISJOINT-CONNECTING-PATH (DCP), which is NP-complete [5]. Consider an arbitrary instance \( \mathcal{I}_1 \) of DCP, i.e., a graph \( G = (V, E) \) and a collection of \( k + 1 \) disjoint vertex pairs \((x_1, y_1), (x_2, y_2), \ldots, (x_{k+1}, y_{k+1}) \in V^2\). We ask whether \( G \) contains \( k + 1 \) mutually vertex-disjoint paths, one connecting \( x_i \) and \( y_i \) for each \( i, 1 \leq i \leq k + 1 \). The number of nodes in the graph is \( n = |V| \), and we have \( k \leq n \).
We build the following instance \( \mathcal{I}_2 \) of the latency minimization problem.
- The application consists of \( n(k+1) \) stages, whose computation costs are outlined below:
\[
\underbrace{w^k \cdots w^k}_{n-2} \quad w^{2k} \quad \epsilon^k \quad \underbrace{w^{k-1} \cdots w^{k-1}}_{n-2} \quad w^{2k-1} \quad \epsilon^{k-1} \quad \cdots \quad \underbrace{w \cdots w}_{n-2} \quad w^{k+1} \quad \epsilon \quad \underbrace{1 \cdots 1}_{n}
\]
Formally, for \( 0 \leq i \leq k-1 \), the \( n - 2 \) stages \( in + 1 \) to \( in + n - 2 \) have a computation cost \( w^{k-i} \), stage \( in + n - 1 \) has a computation cost \( w^{2k-i} \), and stage \( in + n \) has a computation cost \( \epsilon^{k-i} \). Finally, the \( n \) last stages have a computation cost 1. The value of \( \epsilon \) is small; it is fixed to ensure that some slow processors can only process the stages of cost \( \epsilon^i \), for \( 1 \leq i \leq k \). Thus, \( \epsilon = \frac{1}{L+1} \), where \( L \) is the target latency defined later. On the other hand, \( w \) is large: we fix \( w = n^3 \). All communication costs \( \delta_i \) are identical: \( \delta_i = \delta = \frac{1}{n+k+1} \).
- The target platform is composed of \( n + k \) processors, one corresponding to each vertex of the initial graph of DCP, and \( k \) additional processors \( z_i, 1 \leq i \leq k \).
Processor speeds are all equal to 1 except for processors \( z_i, 1 \leq i \leq k \), which are “fast” processors of speed \( w^{k-i+1} \), and processors \( x_i, 2 \leq i \leq k + 1 \), which are “slow” processors of speed \( \epsilon^{k-i+2} \).
The bandwidth between two processors \((n_i, n_j) \in V^2\) equals 1 if \((n_i, n_j) \in E\), and \( \frac{\delta}{L} \) otherwise (where \( L \) is the target latency). Thus, using a communication link which does not correspond to an edge in the entry graph \( G \) leads to a latency greater than \( L \). The entry processor \( P_{in} \) is connected to \( x_1 \) with a link of bandwidth 1, and all other links connecting \( P_{in} \) have a bandwidth \( \frac{\delta}{L} \). Similarly, the link from \( y_{k+1} \) to \( P_{out} \) has a bandwidth 1, and \( P_{out} \) cannot communicate with another processor without exceeding the bound on the latency, \( L \). Finally, the additional processors \( z_i \) are linked with a bandwidth 1 to two processors, \( y_i \) and \( x_{i+1} \), and with a bandwidth \( \frac{\delta}{L} \) to all remaining processors.
Figure 4 illustrates this platform.
- Finally we ask whether we can achieve a latency of \( L = 2n^2w^k \).
First, notice that the size of \( \mathcal{I}_2 \) is polynomial in the size of \( \mathcal{I}_1 \), since both \( n \) and \( k \) are encoded in unary in \( \mathcal{I}_1 \), and the number of stages and processors in \( \mathcal{I}_2 \) is polynomial in \( n \) and \( k \). The values are exponential, since the parameters are bounded by \( w^{2k} \) with \( w = n^3 \), thus by \( n^{6k} \). However, \( n^{6k} \) can be encoded in binary in \( \mathcal{I}_2 \), thus in \( O(k \log n) \), which remains polynomial in the size of \( \mathcal{I}_1 \). The same reasoning holds for the encoding of the powers of \( \epsilon \).
Suppose first that \( \mathcal{I}_1 \) has a solution. We derive a mapping of latency smaller than \( L \) for \( \mathcal{I}_2 \). We denote by \( \ell_i \) the number of vertices in the path going from \( x_i \) to \( y_i \) in the solution of \( \mathcal{I}_1 \). All paths are disjoint, and they cannot include \( x_j \) or \( y_j \) with \( j \neq i \), so we have \( 2 \leq \ell_i \leq n - 2k \), for \( 1 \leq i \leq k + 1 \), and \( \sum_{i=1}^{k+1} \ell_i \leq n \).
We start by mapping the first \( n - 2 \) stages of the pipeline onto the processors of the path from \( x_1 \) to \( y_1 \), assigning one stage per processor and all the remaining ones to processor \( y_1 \). This is possible since there are at least as many stages as processors in the path. The computation time required to traverse these stages is at most \( (n - 2) \cdot w^k \). Then, the stage of cost \( w^{2k} \) is mapped onto \( z_1 \), with a computation time of \( \frac{w^{2k}}{w^k} = w^k \). Finally, we map the following stage of cost \( \epsilon^k \) onto processor \( x_2 \), with a computation time of \( \frac{\epsilon^k}{\epsilon^k} = 1 \). The total computation time for stages up to the one of cost \( \epsilon^k \) is thus at most \( (n-2) \cdot w^k + w^{k} + 1 \leq n \cdot w^k \). Only fast communication links are used, since we start at \( x_1 \), which is connected to \( P_{in} \), and we move along edges of the original graph. Fast communication links are also used to access \( z_1 \), entering from \( y_1 \) and exiting through \( x_2 \).
We then proceed with a similar mapping, for \( 2 \leq i \leq k \):
- the first \( n - 2 \) stages of cost \( w^{k-i+1} \) are mapped on the path between \( x_i \) and \( y_i \) (excluding \( x_i \)), with a total cost \( (n-2) \cdot w^{k-i+1} \leq (n-2) \cdot w^k \);
- the stage of cost \( w^{2k-i+1} \) is mapped on processor \( z_i \), with a cost \( \frac{w^{2k-i+1}}{w^{k-i+1}} = w^k \);
- the stage of cost \( \epsilon^{k-i+1} \) is mapped on processor \( x_{i+1} \), thus achieving a cost of 1.
The total cost for these \( n \) stages is thus less than \( n.w^k \).
Finally, the remaining \( n \) stages of cost 1 are mapped on the path between \( x_{k+1} \) and \( y_{k+1} \), thus allowing us to reach the output processor \( P_{out} \) with a good communication link. The cost for these stages is bounded by \( n \).
The total number of communications does not exceed \( n+k+1 \), since the platform is composed of \( n+k \) processors and the mapping is interval-based. Moreover, only fast communicating links (bandwidth 1) are used, and the cost of a single communication is thus \( \frac{1}{n+k+1} \). Thus the cost induced by communication is bounded by 1.
The latency of this mapping is then:
\[ L_{\text{mapping}} \leq \sum_{i=1}^{k} n \cdot w^k + n + 1 = k \cdot n \cdot w^k + n + 1 \leq 2n^2 w^k = L \]
Indeed, \( k \leq n \) and since \( n \geq 2 \), \( n + 1 \leq n^2 w^k \).
The previous mapping is interval-based since we use the solution of \( I_1 \): the paths are disjoint, so we never reuse a processor after it has handled an interval of stages. Thus, we have found a valid solution to \( I_2 \).
Conversely, if \( I_2 \) has a solution, let us show that \( I_1 \) also has a solution. We prove that any solution mapping of \( I_2 \) has to be of a form similar to the mapping described above, and thus that there exists a disjoint path between \( x_i \) and \( y_i \), for \( 1 \leq i \leq k + 1 \).
First, let us prove that the mapping must use processor \( z_1 \) to compute the stage of cost \( w^{2k} \). Indeed, if this stage is not processed on \( z_1 \), the best we can do is to process it on the fastest remaining processor, \( z_2 \), and the corresponding cost is
\[ \frac{w^{2k}}{w^{k-1}} = w^{k+1} = n^3 w^k > L \]
(we can assume \( n > 2 \)).
Since we must use \( z_1 \) and the mapping is interval-based, \( z_1 \) must have distinct predecessor and successor processors in the mapping. However, only two of its communication links can be used without exceeding the latency bound \( L \). Thus, both processors \( y_1 \) and \( x_2 \) are used. The only stage that can be handled by \( x_2 \) is the one of cost \( \epsilon^k \), because all other stages have a computation cost at least \( \epsilon^{k-1} \) and would thus lead to a cost of at least \( \frac{\epsilon^{k-1}}{\epsilon^k} = \frac{1}{\epsilon} = L + 1 \). Therefore, since \( z_1 \) must process the stage of cost \( w^{2k} \), which precedes the stage of cost \( \epsilon^k \) in the pipeline, \( x_2 \) is necessarily the successor of \( z_1 \) in the mapping.
In a similar way, we prove recursively, for \( i \geq 2 \), that each processor \( z_i \) is used in the mapping to compute the stage of cost \( w^{2k-i+1} \), and that \( x_{i+1} \) is its successor and it processes the stage of cost \( \epsilon^{k-i+1} \).
Let us suppose that this property is true for \( j < i \). By hypothesis, \( z_1, ..., z_{i-1} \) are already used to process stages preceding \( \epsilon^{k-i+2} \), and the mapping is interval-based, thus we cannot use these processors anymore. Thus, if processor \( z_i \) is not used for stage of cost \( w^{2k-i+1} \), the best we can do is to process this stage on the fastest remaining processor, which is \( z_{i+1} \). This leads to a cost of
\[ \frac{w^{2k-i+1}}{w^{k-i}} = w^{k+1} = n^3 w^k > L \]
Therefore, \( z_i \) is used to compute this stage. Thus, the mapping must use processor \( x_{i+1} \). The remaining stage with the lowest computation cost is \( \epsilon^{k-i+1} \), since the smaller ones have already been assigned to \( x_j \), \( j < i + 1 \) (by hypothesis). This one produces a cost of exactly 1, while the second smallest, \( \epsilon^{k-i} \), leads to a cost greater than \( L \). Thus the property is true for \( i \).
In order to respect the latency bound \( L \), the mapping uses only fast communication links. Thus, processors \( x_1 \) and \( y_{k+1} \) are used in the mapping. All other processors \( y_i \) and \( x_{i+1} \), for \( 1 \leq i \leq k \), are used because we must go through \( z_i \). Therefore, processors must be visited in the following order:
$$x_1, y_1, z_1, x_2, \ldots, y_k, z_k, x_{k+1}, y_{k+1}.$$
Since it is an interval-based mapping, the processors used between $x_i$ and $y_i$ are all distinct, and they must be connected by edges in the graph by construction of $\mathcal{I}_2$, thus we found disjoint paths and a solution to $\mathcal{I}_1$.
### 3.2.4 Non-Approximability Results
**Theorem 5.** Given any constant $\lambda > 0$, there exists no $\lambda$-approximation to the Latency-Int-Het problem, unless $P = NP$.
**Proof.** Given $\lambda$, assume that there exists a $\lambda$-approximation to the Latency-Int-Het problem. Let $\mathcal{I}_1$ be an instance of DCP (see proof of Theorem 4). We build the same instance $\mathcal{I}_2$ as in the previous proof, except for:
- The speed of the fast processors $z_i$, $1 \leq i \leq k$. These are set to $(\lambda w)^{k-i+1}$ instead of $w^{k-i+1}$.
- The speed of the slow processors $x_i$, $2 \leq i \leq k+1$. These are set to $\lambda^{-(k-i+2)}\epsilon^{k-i+2}$ instead of $\epsilon^{k-i+2}$.
- The computation cost of each stage of cost $w^{2k-i+1}$, $1 \leq i \leq k$. Each of these costs are transformed to $(\lambda w)^{k-i+1}w^k$.
- The computation cost of each stage of cost $\epsilon^i$, $1 \leq i \leq k$. Each of these costs are transformed to $\lambda^{-i}\epsilon^i$.
We use the $\lambda$-approximation algorithm to solve this new instance $\mathcal{I}_2$ of our problem, which returns a mapping scheme of latency $L_{\text{alg}}$. We thus have $L_{\text{alg}} \leq \lambda L_{\text{opt}}$, where $L_{\text{opt}}$ is the optimal latency. Then we prove that we can solve DCP in polynomial time.
Let $L = 2n^2w^k$ be the bound on the latency used in the proof of Theorem 4.
- If $L_{\text{alg}} > \lambda L$, then $L_{\text{opt}} > L$ and there is no solution to DCP; otherwise, we could achieve a mapping of latency equal to $L$ with a mapping similar to the one described in the proof of Theorem 4.
- If $L_{\text{alg}} \leq \lambda L$, let us prove that DCP has a solution, by ensuring that the mapping has a structure similar to that in the proof of Theorem 4.
First, we must map the stage of cost $w^{2k}$ on processor $z_1$; otherwise, the best we can do is to process it on the fastest remaining processor $z_2$, and the corresponding cost is $\frac{(\lambda w)^{k}w^k}{(\lambda w)^{k-1}} = \lambda w^{k+1} > \lambda L \geq L_{\text{alg}}$. Then, we show that the only stage handled by $x_2$ is the stage of cost $\lambda^{-k}\epsilon^k$. Other stages have a computation cost greater than $\lambda^{-(k-1)}\epsilon^{k-1}$ and would lead to a cost $\frac{\lambda^{-(k-1)}\epsilon^{k-1}}{\lambda^{-k}\epsilon^k} = \frac{\lambda}{\epsilon} = \lambda(L + 1) > \lambda L \geq L_{\text{alg}}$.
This line of reasoning can be continued recursively, as in the proof of Theorem 4, thanks to the introduction of $\lambda$ in the costs. Therefore, the mapping is similar to the one described in the proof of Theorem 4, and we conclude that DCP has a solution.
Therefore, given a $\lambda$-approximation algorithm for \textsc{Latency-Int-Het}, we can answer DCP in polynomial time, thus proving that $P = NP$. This contradicts our hypothesis and proves the non-approximability result.
We denote by \textsc{Latency-One-to-One-Het} the problem of finding the one-to-one mapping which minimizes the latency on a heterogeneous platform. We can also prove a non-approximability result for this problem.
**Theorem 6.** Given any constant $\lambda > 0$, there exists no $\lambda$-approximation to the \textsc{Latency-One-to-One-Het} problem, unless $P = NP$.
**Proof.** Given $\lambda$, assume that there exists a $\lambda$-approximation to the \textsc{Latency-One-to-One-Het} problem. Consider an arbitrary instance $I_1$ of the Hamiltonian Path problem HP, i.e., a graph $G = (V, E)$: is there a Hamiltonian path in $G$? This problem is known to be NP-complete [5]. We aim at showing that we can solve it in polynomial time by using the $\lambda$-approximation algorithm for \textsc{Latency-One-to-One-Het}.
We build the following instance $I_2$ of \textsc{Latency-One-to-One-Het}: we consider an application with $n = |V|$ identical stages. All application costs are unit costs: $w_i = \delta_i = 1$ for all $i$. For the platform, in addition to $P_{in}$ and $P_{out}$ we use $n$ identical processors of unit speed: $s_i = 1$ for all $i$. We simply write $i$ for the processor $P_i$ that corresponds to vertex $v_i \in V$. We only play with the link bandwidths: we interconnect $P_{in}$ and $P_{out}$ to all other processors with links of bandwidth 1. Also, if $(i,j) \in E$, then we interconnect $i$ and $j$ with a link of bandwidth 1. All the other links are slow: their bandwidth is set to $\frac{1}{\lambda(2n+2)}$. We ask whether we can achieve a latency not greater than $L = 2n + 1$. This transformation can clearly be done in polynomial time, and the size of $I_2$ is linear in the size of $I_1$.
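The construction of $I_2$ can be sketched in Python as follows (our own sketch; the function name and the representation of the platform are assumptions):

```python
def build_hp_instance(adj, lam):
    """Build the Latency-One-to-One-Het instance I_2 from a graph G.

    adj[i][j] is True iff (v_i, v_j) is an edge of G; lam is the constant
    lambda of the assumed approximation algorithm. Returns unit stage costs
    w and delta, unit speeds s, the bandwidth matrix b between the n compute
    processors (the diagonal is unused), and the target latency bound L.
    P_in and P_out, connected to every processor with bandwidth 1, are
    left implicit here.
    """
    n = len(adj)
    w = [1.0] * n                      # unit computation costs
    delta = [1.0] * (n + 1)            # unit communication costs
    s = [1.0] * n                      # unit processor speeds
    slow = 1.0 / (lam * (2 * n + 2))   # using a slow link exceeds lambda * L
    b = [[1.0 if adj[i][j] else slow for j in range(n)] for i in range(n)]
    L = 2 * n + 1                      # latency bound asked in the reduction
    return w, delta, s, b, L
```

Any mapping of latency at most $\lambda L$ on this instance can only cross bandwidth-1 links, and therefore traces a Hamiltonian path in $G$.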
Now we use the $\lambda$-approximation algorithm to solve this new instance $I_2$ of our problem, which returns a mapping scheme of latency $L_{alg}$. We thus have $L_{alg} \leq \lambda L_{opt}$, where $L_{opt}$ is the optimal latency. Then we prove that we can solve HP in polynomial time.
- If $L_{alg} > \lambda L$, then $L_{opt} > L$ and there is no solution to HP. Otherwise, let $v_1, ..., v_n$ be the Hamiltonian path in $G$. If we map stage $S_i$ onto processor $i$, for all $i$, then we obtain a mapping of cost $2n + 1 = L$. Indeed, the total cost for computations is $n$, and only edges of the original graph, of bandwidth 1, are used in the mapping, thus adding a cost of $n + 1$ for communications. This contradicts the fact that $L_{opt} > L$. Therefore, HP has no solution.
- If $L_{alg} \leq \lambda L$, let us prove that HP has a solution. The mapping does not use any slow link; otherwise, it would induce a cost $\lambda(2n+2) = \lambda(L+1) > \lambda L$, which contradicts our hypothesis. Since the mapping is one-to-one, all $n$ nodes must be used in the mapping, and it defines a Hamiltonian path.
Thus, depending on the result of the algorithm, we can answer the HP problem, which proves that $P=NP$. This completes the proof.
\qed
4 Conclusion
In this paper, we have studied the problem of mapping linear chain applications onto large-scale heterogeneous platforms, focusing on latency minimization. The latency measures the response time of the system to process one single data set entirely, and it is a key parameter for the user. When the platform has homogeneous communications, it is straightforward to provide the optimal mapping, be it one-to-one, interval-based or general.
When moving to fully heterogeneous platforms, the situation changes dramatically. Only the general mapping problem remains polynomial (and we provided a new proof of this result). While the one-to-one mapping problem was known to be NP-hard, the complexity of the interval mapping problem was left open. The main result of this paper fills this gap, proving NP-hardness through quite an involved reduction. Furthermore, we prove that neither the interval mapping problem nor the one-to-one mapping problem can be approximated within any constant factor (unless P = NP). All these results constitute an important step in assessing the difficulty of the various mapping strategies that have been studied in the literature.
References
Using Destination-Passing Style to Compile a Functional Language into Efficient Low-Level Code
In submission
AMIR SHAIKHHA∗, EPFL
ANDREW FITZGIBBON, Microsoft Research
SIMON PEYTON-JONES, Microsoft Research
DIMITRIOS VYTINIOTIS, Microsoft Research
We show how to compile high-level functional array-processing programs, drawn from image processing and machine learning, into C code that runs as fast as hand-written C. The key idea is to transform the program to destination passing style, which in turn enables a highly-efficient stack-like memory allocation discipline.
ACM Reference format:
DOI: 10.1145/nnnnnn.nnnnnnn
1 INTRODUCTION
Applications in computer vision, robotics, and machine learning [32, 35?] may need to run in memory-constrained environments with strict latency requirements, and have high turnover of small-to-medium-sized arrays. For these applications the overhead of most general-purpose memory management, for example malloc/free, or of a garbage collector, is unacceptable, so programmers often implement custom memory management directly in C.
In this paper we propose a technique that automates a common custom memory-management technique, which we call destination passing style (DPS), as used in efficient C and Fortran libraries such as BLAS [? ]. We allow the programmer to code in a high-level functional style, while guaranteeing efficient stack allocation of all intermediate arrays. Fusion techniques for such languages are absolutely essential to eliminate intermediate arrays, and are well established. But fusion leaves behind an irreducible core of intermediate arrays that must exist to accommodate multiple or random-access consumers.
That is where DPS takes over. The key idea is that every function is given the storage in which to store its result. The caller of the function is responsible for allocating the destination storage, and deallocating it as soon as it is no longer needed. Of course this incurs a burden at the call site of computing the size of the callee result, but we will show how a surprisingly rich input language can nevertheless allow these computations to be done in negligible time. Our contributions are:
• We propose a new destination-passing style intermediate representation that captures a stack-like memory management discipline and ensures there are no leaks (Section 3). This is a good compiler intermediate language because we can perform transformations on it and reason about how much memory a program will take. It also allows efficient C code generation with bump-allocation. Although it is folklore to compile functions in this style when the result size is known, we have not seen DPS used as an actual compiler intermediate language, despite the fact that DPS has been used for other purposes (c.f. Section 6).

∗This work was done while the author was at Microsoft Research, Cambridge.

© 2017 ACM. This is the author’s version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in PACM Progr. Lang., http://dx.doi.org/10.1145/nnnnnn.nnnnnnn.
• DPS requires knowing at the call site how much memory a function will need. We design a carefully-restricted higher-order functional language, $\tilde{F}$ (Section 2), and a compositional shape translation (Section 3.3) that guarantee to compute the result size of any $\tilde{F}$ expression, either statically or at runtime, with no allocation, and a run-time cost independent of the data or its size (Section 3.6). We do not know any other language stack with these properties.
• We evaluate the runtime and memory performance of both micro-benchmarks and real-life computer vision and machine-learning workloads written in our high-level language and compiled to C via DPS (as shown in Section 5). We show that our approach gives performance comparable to, and sometimes better than, idiomatic C++.\footnote{Keen C++ programmers may wonder what advantage we anticipate over C++. Primarily this is the future possibility of program transformations such as automatic differentiation, which are easily expressed in $\tilde{F}$, but remain slow in C++ \cite{28}, and would be expected to generate code as efficient as our benchmarks indicate.}
2 $\tilde{F}$
$\tilde{F}$ is a subset of F#, an ML-like functional programming language (the syntax is slightly different from F# for presentation reasons). It is designed to be expressive enough to make it easy to write array-processing workloads, while simultaneously being restricted enough to allow it to be compiled to code that is as efficient as hand-written C, with very simple and efficient memory management. We are willing to sacrifice some expressiveness to achieve higher performance.
Fig. 1. The core $\tilde{F}$ syntax.
Typing Rules:
- **(T-If)** \( \dfrac{\Gamma \vdash e_1 : \text{Bool} \quad \Gamma \vdash e_2 : M \quad \Gamma \vdash e_3 : M}{\Gamma \vdash \text{if } e_1 \text{ then } e_2 \text{ else } e_3 : M} \)
- **(T-App)** \( \dfrac{\Gamma \vdash e_0 : \overline{T} \Rightarrow M \quad \Gamma \vdash \overline{e} : \overline{T}}{\Gamma \vdash e_0 \; \overline{e} : M} \)
- **(T-Abs)** \( \dfrac{\Gamma, \overline{x : T} \vdash e : M}{\Gamma \vdash \lambda \overline{x : T}.\; e : \overline{T} \Rightarrow M} \)
- **(T-Let)** \( \dfrac{\Gamma \vdash e_1 : T_1 \quad \Gamma, x : T_1 \vdash e_2 : T_2}{\Gamma \vdash \text{let } x = e_1 \text{ in } e_2 : T_2} \)
Scalar Function Constants:
- \( +, -, \times, / : \text{Num, Num} \Rightarrow \text{Num} \)
- \( \% : \text{Index, Index} \Rightarrow \text{Index} \)
- \( >, <, \geq, \leq : \text{Num, Num} \Rightarrow \text{Bool} \)
- \( &&, || : \text{Bool, Bool} \Rightarrow \text{Bool} \)
- \( ! : \text{Bool} \Rightarrow \text{Bool} \)
Vector Function Constants:
- \( \text{build} : \text{Card, (Index} \Rightarrow \text{M) \Rightarrow Array<M>} \)
- \( \text{reduce} : (\text{M, Index} \Rightarrow \text{M}), \text{M, Card} \Rightarrow \text{M} \)
- \( \text{length} : \text{Array<M>} \Rightarrow \text{Card} \)
- \( \text{get} : \text{Array<M>}, \text{Index} \Rightarrow \text{M} \)
Syntactic Sugars:
- \( e_0[e_1] = \text{get } e_0 \; e_1 \)
- \( e_1 \oplus e_2 = \text{bop } e_1 \; e_2 \)
Fig. 2. The type system and built-in constants of \( \tilde{F} \)
2.1 Syntax and types of \( \tilde{F} \)
In addition to the usual \( \lambda \)-calculus constructs (abstraction, application, and variable access), \( \tilde{F} \) supports let binding and conditionals. In support of array programming, the language has several built-in functions: \texttt{build} for producing arrays; \texttt{reduce} for iterating a given number of times (from 0 to \( n-1 \)) while maintaining a state across iterations; \texttt{length} for getting the size of an array; and \texttt{get} for indexing an array.
The syntax of \( \tilde{F} \) is shown in Figure 1, while the type system and several other built-in functions are shown in Figure 2. Note that Figure 1 shows an abstract syntax, and parentheses can be used as necessary. Also, \( \overline{x} \) and \( \overline{e} \) denote one or more variables and expressions, respectively, separated by spaces, whereas \( \overline{T} \) represents one or more types separated by commas.
Although \( \tilde{F} \) is a higher-order functional language, it is carefully restricted in order to make it efficiently compilable:
- \( \tilde{F} \) does not support arbitrary recursion, hence is not Turing-complete. Instead one can use \texttt{build} and \texttt{reduce} for producing and iterating over arrays.
- The type system is monomorphic. The only polymorphic functions are the built-in functions of the language, such as \texttt{build} and \texttt{reduce}, which are best thought of as language constructs rather than first-class functions.
- An array, of type \texttt{Array<M>}, is one-dimensional but can be nested. Nested arrays are expected to be rectangular, which is enforced by the dedicated \texttt{Card} type for array dimensions, used as the type of the first parameter of the \texttt{build} function.
- No partial application is allowed as an expression in this language. Additionally, an abstraction cannot return a function value. These two restrictions are enforced by the \texttt{(T-App)} and \texttt{(T-Abs)} typing rules, respectively (cf. Figure 2).
As an example, Figure 3 shows a linear algebra library defined using \( \tilde{F} \). First, there are vector mapping operations (vectorMap and vectorMap2) which \texttt{build} vectors using the size of the input vectors. The \( i^{th} \) element (using a zero-based indexing system) of the output vector is the result of the application
of the given function to the $i^{th}$ element of the input vectors. Using the vector mapping operations, one can define vector addition, vector element-wise multiplication, and vector-scalar multiplication. Then, there are several vector operations which consume a given vector by reducing them. For example, vectorSum computes the sum of the elements of the given vector, which is used by the vectorDot and vectorNorm operations. Similarly, several matrix operations are defined using these vector operations. More specifically, matrix-matrix multiplication is defined in terms of vector dot product and matrix
transpose. Finally, vector outer product is defined in terms of matrix multiplication of the matrix form of the two input vectors.
2.2 Fusion
Consider this function, which accepts two vectors and returns the norm of the vector resulting from the addition of these two vectors.
\[ f = \lambda \text{vec1} \text{vec2}. \text{vectorNorm} (\text{vectorAdd vec1 vec2}) \]
Executing this program, as is, involves constructing two vectors in total: one intermediate vector which is the result of adding the two vectors \text{vec1} and \text{vec2}, and another intermediate vector which is used in the implementation of vectorNorm (vectorNorm invokes vectorDot, which invokes vectorEMul in order to perform the element-wise multiplication between two vectors). In this example one can remove the intermediate vectors by \textit{fusion} (or \textit{deforestation}) \cite{5, 10, 30, 39}. After fusion the function might look like this:
\[ f = \lambda \text{vec1} \text{vec2}. \text{reduce} (\lambda \text{sum idx}. \text{sum} + (\text{vec1}[\text{idx}]+\text{vec2}[\text{idx}]) * (\text{vec1}[\text{idx}]+\text{vec2}[\text{idx}])) 0 (\text{length vec1}) \]
This is \textit{much} better because it does not construct the intermediate vectors. Instead, the elements of the intermediate vectors are consumed as they are produced.
Fusion is well studied, and we take it for granted in this paper. However, there are plenty of cases in which the intermediate array cannot be removed. For example: the intermediate array is passed to a foreign library function; it is passed to a library function that is too big to inline; or it is consumed by multiple consumers, or by a consumer with a random (non-sequential) access pattern.
In these cases there are good reasons to build an intermediate array, but we want to allocate, fill, use, and de-allocate it extremely efficiently. In particular, we do not want to rely on a garbage collector.
3 DESTINATION-PASSING STYLE
Thus motivated, we define a new intermediate language, DPS-\(\tilde{F}\), in which memory allocation and deallocation is explicit. DPS-\(\tilde{F}\) uses \textit{destination-passing style}: every array-returning function receives as its first parameter a pointer to memory in which to store the result array. No function allocates the storage needed for its result; instead the responsibility of allocating and deallocating the output storage of a function is given to the caller of that function. Similarly, all the storage allocated inside a function can be deallocated as soon as the function returns its result.
Destination passing style is a standard programming idiom in C. For example, the C standard library procedures that return a string (e.g. \texttt{strcpy}) expect the caller to provide storage for the result. This gives the programmer full control over memory management for string values. Other languages have exploited destination-passing style during compilation \cite{15, 16}.
3.1 The DPS-\(\tilde{F}\) language
The syntax of DPS-\(\tilde{F}\) is shown in Figure 4, while its type system is in Figure 5. The main additional construct in this language is the one for allocating a particular amount of storage space, \( \text{alloc} \; t_1 \; (\lambda r.\, t_2) \). In this construct, \( t_1 \) is an expression that evaluates to the size (in bytes) required for storing the result of evaluating \( t_2 \). This storage is available in the lexical scope of the lambda parameter, \textit{and is deallocated outside this scope}. The previous example can be written in the following way in DPS-\(\tilde{F}\):
Each lambda abstraction takes an additional parameter specifying the storage space used for its result. Correspondingly, every application must pass an additional argument specifying the memory location for the return value in the case of an array-returning function; a scalar-returning function is instead applied to a dummy empty memory location, written \( \bullet \). In this example, the number of bytes allocated for the memory location \( r_2 \) is specified by the expression \((\text{vecBytes vec1})\), which computes the number of bytes of the array \text{vec1}.
3.2 Translation from \( \tilde{F} \) to DPS-\( \tilde{F} \)
We now turn our attention to the translation from \( \tilde{F} \) to DPS-\( \tilde{F} \). Before translating \( \tilde{F} \) expressions to their DPS form, the expressions should be transformed into a normal form similar to Administrative Normal Form (ANF) [7]. In this representation, each subexpression of an application is either a constant value or a variable. This greatly simplifies the translation rules, especially the (D-App) rule.\(^2\) The representation of our working example in ANF is as follows:
\[
\begin{align*}
f &= \lambda r_1 \ \text{vec1} \ \text{vec2}. \ \text{alloc (vecBytes vec1)} \ (\lambda r_2. \\
& \quad \text{vectorNorm\_dps } \bullet \ (\text{vectorAdd\_dps } r_2 \ \text{vec1 vec2}))
\end{align*}
\]
\(^2\) In a true ANF representation, every subexpression is a constant value or a variable, whereas in our case, we only care about the subexpressions of an application. Hence, our representation is almost ANF.
Typing Rules:
\[
\begin{align*}
\text{(T-Alloc)} & \quad \frac{\Gamma \vdash t_0 : \text{Card} \qquad \Gamma, r : \text{Ref} \vdash t_1 : \text{M}}{\Gamma \vdash \text{alloc} \; t_0 \; (\lambda r.\, t_1) : \text{M}}
\end{align*}
\]
Vector Function Constants:
- \text{build} : \text{Ref, Card, (Ref, Index } \Rightarrow \text{ M), Card, (Card } \Rightarrow \text{ Shp) } \Rightarrow \text{ Array<M>}
- \text{reduce} : \text{Ref, (Ref, M, Index } \Rightarrow \text{ M), M, Card, (Shp, Card } \Rightarrow \text{ Shp), Shp, Card } \Rightarrow \text{ M}
- \text{get} : \text{Ref, Array<M>, Index, Shp, Card } \Rightarrow \text{ M}
- \text{length} : \text{Ref, Array<M>, Shp } \Rightarrow \text{ Card}
- \text{copy} : \text{Ref, Array<M>, Shp } \Rightarrow \text{ Array<M>}
Scalar Function Constants:
\[
\begin{align*}
\text{DPS version of } \tilde{F} \text{ Scalar Constants (See Figure 2).} \\
\text{stgOff} & : \text{Ref, Shp } \Rightarrow \text{Ref} \\
\text{vecShp} & : \text{Card, Shp } \Rightarrow \text{(Card } \ast \text{ Shp)} \\
\text{fst} & : \text{(Card } \ast \text{ Shp) } \Rightarrow \text{Card} \\
\text{snd} & : \text{(Card } \ast \text{ Shp) } \Rightarrow \text{Shp} \\
\text{bytes} & : \text{Shp } \Rightarrow \text{Card}
\end{align*}
\]
Syntactic Sugars:
\[
\begin{align*}
t_0[t_1]\{r\} & = \text{get } r \ t_0 \ t_1 \\
\text{length } t & = \text{length } \bullet t \\
(t_0, t_1) & = \text{vecShp } t_0 \ t_1
\end{align*}
\]
for all binary ops \(\text{bop} : e_1 \text{ bop } e_2 = \text{bop } \bullet e_1 \ e_2\)
Fig. 5. The type system and built-in constants of DPS-\(\tilde{F}\)
\[
\begin{align*}
D[e]_r & = t \\
(D\text{-App}) & \quad D[e_0 \ x_1 \ \ldots \ x_k]_r = (D[e_0]_\bullet) \ r \ x_1 \ \ldots \ x_k \ x_1^{\text{shp}} \ \ldots \ x_k^{\text{shp}} \\
(D\text{-Abs}) & \quad D[\lambda x_1 \ \ldots \ x_k. \ e_1]_\bullet = \lambda r_2 \ x_1 \ \ldots \ x_k \ x_1^{\text{shp}} \ \ldots \ x_k^{\text{shp}}. \ D[e_1]_{r_2} \\
(D\text{-VarScalar}) & \quad D[x]_\bullet = x \\
(D\text{-VarVector}) & \quad D[x]_r = \text{copy } r \ x \ x^{\text{shp}} \\
(D\text{-Let}) & \quad D[\text{let } x = e_1 \ \text{in } e_2]_r = \text{let } x^{\text{shp}} = S[e_1] \ \text{in } \\
& \quad \quad \text{alloc (bytes } x^{\text{shp}}) \ (\lambda r_2. \ \text{let } x = D[e_1]_{r_2} \ \text{in } D[e_2]_r) \\
(D-If) & \quad D[\text{if } e_1 \ \text{ then } e_2 \ \text{ else } e_3]\_r = \text{if } D[e_1]\_\bullet \ \text{ then } D[e_2]\_r \ \text{ else } D[e_3]\_r \\
(DT-Fun) & \quad D_T[T_1, \ldots, T_k \Rightarrow \text{ M}] = \text{Ref, } D_T[T_1], \ldots, D_T[T_k], S_T[T_1], \ldots, S_T[T_k] \Rightarrow D_T[M] \\
(DT-Mat) & \quad D_T[M] = M \\
(DT-Bool) & \quad D_T[\text{Bool}] = \text{Bool} \\
(DT-Card) & \quad D_T[\text{Card}] = \text{Card}
\end{align*}
\]
Fig. 6. Translation from \(\tilde{F}\) to DPS-\(\tilde{F}\)
\[
\begin{align*}
f = \lambda \ \text{vec1} \ \text{vec2.} \\
\text{let tmp = } \text{vectorAdd vec1 vec2 in } \\
\text{vectorNorm tmp}
\end{align*}
\]
Figure 6 shows the translation from \(\tilde{F}\) to DPS-\(\tilde{F}\), where \(D[e]_r\) is the translation of a \(\tilde{F}\) expression \(e\) into a DPS-\(\tilde{F}\) expression that stores \(e\)'s value in memory \(r\). Rule (D-Let) is a good place to start. It uses \text{alloc} to allocate enough space for the value of \(e_1\), the right hand side of the let — but how much space
is that? We use an auxiliary translation $\mathcal{S}[e_1]$ to translate $e_1$ to an expression that computes $e_1$’s \textit{shape} rather than its \textit{value}. The shape of an array expression specifies the cardinality of each dimension. We will discuss why we need shape (what goes wrong with just using bytes) and the shape translation in Section 3.3. This shape is bound to $x^{\text{shp}}$, and used in the argument to \texttt{alloc}. The freshly-allocated storage $r_2$ is used as the destination for translating the right hand side $e_1$, while the original destination $r$ is used as the destination for the body $e_2$.
In general, every variable $x$ in $\tilde{F}$ becomes a \texttt{pair} of variables $x$ (for $x$’s value) and $x^{\text{shp}}$ (for $x$’s shape) in DPS-$\tilde{F}$. You can see this same phenomenon in rules (D-App) and (D-Abs), which deal with lambdas and application: we turn each lambda-bound argument $x$ into \texttt{two} arguments $x$ and $x^{\text{shp}}$.
Finally, in rule (D-App) the destination memory $r$ for the context is passed on to the function being called, as its additional first argument; and in (D-Abs) each lambda gets an additional first argument, which is used as the destination when translating the body of the lambda. Figure 6 also gives a translation of an $\tilde{F}$ type $T$ to the corresponding DPS-$\tilde{F}$ type $D$.
For variables there are two cases. In rule (D-VarScalar) a scalar variable is translated to itself, while in rule (D-VarVector) we must copy the array into the designated result storage using the \texttt{copy} function. The \texttt{copy} function copies the array elements as well as the header information into the given storage.
### 3.3 Shape translation
As we have seen, rule (D-Let) relies on the \textit{shape translation} of the right hand side. This translation is given in Figure 7. If $e$ has type $T$, then $\mathcal{S}[e]$ is an expression of type $\mathcal{S}(T)$ that gives the shape of $e$. This expression can always be evaluated without allocation.
A \textit{shape} is an expression of type $\text{Shp}$ (Figure 4), whose values are given by $P$ in that figure. There are three cases to consider.
First, a scalar value has shape $\circ$ (rules (S-ExpNum), (S-ExpBool)).
Second, when $e$ is an array, $\mathcal{S}[e]$ gives the shape of the array as a nested tuple, such as $(3, (4, \circ))$ for a 3-vector of 4-vectors. So the “shape” of an array specifies the cardinality of each dimension.
Finally, when $e$ is a function, $\mathcal{S}[e]$ is a function that takes the shapes of its arguments and returns the shape of its result. You can see this directly in rule (S-App): to compute the shape of (the result of) a call, apply the shape-translation of the function to the shapes of the arguments. This is possible because $\tilde{F}$ programs do not allow the programmer to write a function whose result size depends on the contents of its input array.
What is the shape-translation of a function $f$? Remembering that every in-scope variable $f$ has become a pair of variables, one for the value and one for the shape, we can simply use the latter, $f^{\text{shp}}$, as we see in rule (S-Var).
For arrays, could the shape be simply the number of bytes required for the array, rather than a nested tuple? No. Consider the following function, which returns the first row of its argument matrix:
\[
\text{firstRow} = \lambda m: \text{Array<Array<Double>>}. m[0]
\]
The shape translation of firstRow, namely firstRow$^{\text{shp}}$, is given the shape of $m$, and must produce the shape of $m$’s first row. It cannot do that given only the number of bytes in $m$; it must know how many rows and columns it has. But by defining shapes as a nested tuple, it becomes easy: see rule (S-Get).
The shape of the result of the iteration construct (\texttt{reduce}) requires the shape of the state expression to remain the same across iterations; otherwise the compiler produces an error, as shown in rule (S-Reduce).
Using Destination-Passing Style to Compile a Functional Language into Efficient Low-Level Code
\[ S[e] = s \]
\[(S\text{-App}) \quad S[e_0 \ e_1 \ldots \ e_k] = S[e_0] \ S[e_1] \ldots S[e_k] \]
\[(S\text{-Abs}) \quad S[\lambda x_1 : T_1, \ldots, x_k : T_k. \ e] = \lambda x_1^{shp} : S_T[T_1], \ldots, x_k^{shp} : S_T[T_k]. \ S[e] \]
\[(S\text{-Var}) \quad S[x] = x^{shp} \]
\[(S\text{-Let}) \quad S[\text{let } x = e_1 \text{ in } e_2] = \text{let } x^{shp} = S[e_1] \text{ in } S[e_2] \]
\[(S\text{-If}) \quad S[\text{if } e_1 \text{ then } e_2 \text{ else } e_3] = \begin{cases} S[e_2] & S[e_2] \equiv S[e_3] \\ \text{Compilation Error!} & S[e_2] \not\equiv S[e_3] \end{cases} \]
\[(S\text{-ExpNum}) \quad e : \text{Num} \vdash S[e] = \circ \]
\[(S\text{-ExpBool}) \quad e : \text{Bool} \vdash S[e] = \circ \]
\[(S\text{-Card}) \quad S[N] = N \]
\[(S\text{-AddCard}) \quad S[e_0 + e_1] = S[e_0] + S[e_1] \]
\[(S\text{-MulCard}) \quad S[e_0 * e_1] = S[e_0] * S[e_1] \]
\[(S\text{-Build}) \quad S[\text{build } e_0 e_1] = (S[e_0], (S[e_1] \circ)) \]
\[(S\text{-Get}) \quad S[e_0[e_1]] = \text{snd } S[e_0] \]
\[(S\text{-Length}) \quad S[\text{length } e_0] = \text{fst } S[e_0] \]
\[(S\text{-Reduce}) \quad S[\text{reduce } e_1 \ e_2 \ e_3] = \begin{cases} S[e_2] & \forall n. \ S[e_1 \ e_2 \ n] \equiv S[e_2] \\ \text{Compilation Error!} & \text{otherwise} \end{cases} \]
\[ S_T[T] = S \]
\[(ST\text{-Fun}) \quad S_T[T_1, T_2, \ldots, T_k \Rightarrow M] = S_T[T_1], S_T[T_2], \ldots, S_T[T_k] \Rightarrow S_T[M] \]
\[(ST\text{-Num}) \quad S_T[\text{Num}] = \text{Card} \]
\[(ST\text{-Bool}) \quad S_T[\text{Bool}] = \text{Card} \]
\[(ST\text{-Card}) \quad S_T[\text{Card}] = \text{Card} \]
\[(ST\text{-Vector}) \quad S_T[\text{Array}<M>] = (\text{Card}, S_T[M]) \]
Fig. 7. Shape Translation of \( \tilde{F} \)
The other rules are straightforward. The key point is this: by translating every in-scope variable, including functions, into a pair of variables, we can give a compositional account of shape translation, even in a higher order language.
### 3.4 An example
Using this translation, the running example at the beginning of Section 3.2 is translated as follows:
Amir Shaikhha, Andrew Fitzgibbon, Simon Peyton-Jones, and Dimitrios Vytiniotis
\[
\begin{align*}
\text{alloc } 0 \ (\lambda r.\, t_1) & \rightarrow t_1[r \mapsto \bullet] & \text{Empty Allocation} \\
\text{alloc } t_1 \ (\lambda r_1.\, \text{alloc } t_2 \ (\lambda r_2.\, t_3)) & \rightarrow \text{alloc } (t_1 +^c t_2) \ (\lambda r_1.\, \text{let } r_2 = \text{stgOff } r_1 \ t_1 \text{ in } t_3) & \text{Allocation Merging} \\
\text{alloc } t_1 \ (\lambda r.\, t_2) & \rightarrow t_2 \quad \text{if } r \notin \text{FV}(t_2) & \text{Dead Allocation} \\
\lambda x.\, \text{alloc } t_1 \ (\lambda r.\, t_2) & \rightarrow \text{alloc } t_1 \ (\lambda r.\, \lambda x.\, t_2) \quad \text{if } x \notin \text{FV}(t_1) & \text{Allocation Hoisting} \\
\text{bytes } \circ & \rightarrow 0 \\
\text{bytes } (N, \circ) & \rightarrow \text{NUM\_BYTES} \ast^c N +^c \text{HDR\_BYTES} \\
\text{bytes } (N, s) & \rightarrow (\text{bytes } s) \ast^c N +^c \text{HDR\_BYTES}
\end{align*}
\]
\]
Fig. 8. Simplification rules of DPS-\(\tilde{F}\)
\[
f = \lambda r_0 \ \text{vec}1 \ \text{vec}2 \ \text{vec}1^{\text{shp}} \ \text{vec}2^{\text{shp}}. \\
\text{let } \text{tmp}^{\text{shp}} = \text{vectorAdd}^{\text{shp}} \ \text{vec}1^{\text{shp}} \ \text{vec}2^{\text{shp}} \ \text{in} \\
\text{alloc} (\text{bytes } \text{tmp}^{\text{shp}}) (\lambda r_1. ) \\
\text{let } \text{tmp} = \\
\text{vectorAdd } r_1 \ \text{vec}1 \ \text{vec}2 \\
\text{vec}1^{\text{shp}} \ \text{vec}2^{\text{shp}} \ \text{in} \\
\text{vectorNorm } r_0 \ \text{tmp} \ \text{tmp}^{\text{shp}} \\
\)
The shape translations of some \(\tilde{F}\) functions from Figure 3 are as follows:
\[
\begin{align*}
\text{let } \text{vectorRange}^{\text{shp}} &= \lambda n^{\text{shp}}. \ (n^{\text{shp}}, (\lambda i^{\text{shp}}. \circ) \ \circ) \\
\text{let } \text{vectorMap2}^{\text{shp}} &= \lambda v1^{\text{shp}} \ v2^{\text{shp}} \ f^{\text{shp}}. \ (\text{fst } v1^{\text{shp}}, (\lambda i^{\text{shp}}. \ f^{\text{shp}} \ (\text{snd } v1^{\text{shp}}) \ (\text{snd } v2^{\text{shp}})) \ \circ) \\
\text{let } \text{vectorAdd}^{\text{shp}} &= \lambda v1^{\text{shp}} \ v2^{\text{shp}}. \ \text{vectorMap2}^{\text{shp}} \ v1^{\text{shp}} \ v2^{\text{shp}} \ (\lambda x^{\text{shp}} \ y^{\text{shp}}. \circ) \\
\text{let } \text{vectorNorm}^{\text{shp}} &= \lambda v^{\text{shp}}. \circ
\]
3.5 Simplification
As is apparent from the examples in the previous section, code generated by the translation has many optimisation opportunities. This optimisation, or simplification, is applied in three stages: 1) \(\tilde{F}\) expressions, 2) translated Shape-\(\tilde{F}\) expressions, and 3) translated DPS-\(\tilde{F}\) expressions. In the first stage, \(\tilde{F}\) expressions are simplified to exploit fusion opportunities that remove intermediate arrays entirely. Furthermore, other compiler transformations such as constant folding, dead-code elimination, and common-subexpression elimination are also applied at this stage.
In the second stage, the Shape-\(\tilde{F}\) expressions are simplified. The simplification process for these expressions mainly involves partial evaluation. By inlining all shape functions, and performing \(\beta\)-reduction and constant folding, shapes can often be computed at compile time, or at least can be greatly simplified. For example, the shape translations presented in Section 3.3 after performing simplification are as follows:
let vectorRange^{shp} = \lambda n^{shp}. (n^{shp}, \circ)
let vectorMap2^{shp} = \lambda v1^{shp} v2^{shp} f^{shp}. (\text{fst } v1^{shp}, f^{shp} (\text{snd } v1^{shp}) (\text{snd } v2^{shp}))
let vectorAdd^{shp} = \lambda v1^{shp} v2^{shp}. v1^{shp}
let vectorNorm^{shp} = \lambda v^{shp}. \circ
The final stage involves both partially evaluating the shape expressions in DPS-\(\tilde{F}\) and simplifying the storage accesses in the DPS-\(\tilde{F}\) expressions. Figure 8 shows the simplification rules for storage accesses. The first two rules remove empty allocations and merge consecutive allocations, respectively. The third rule removes a dead allocation, i.e. an allocation whose storage is never used. The fourth rule hoists an allocation out of an abstraction whenever possible. The benefit of this rule is amplified when the storage is allocated inside a loop (\texttt{build} or \texttt{reduce}). Note that none of these transformation rules is available in \(\tilde{F}\), due to the lack of explicit storage facilities.
After applying the presented simplification process, our working example is translated to the following program:
\[
\begin{align*}
f = \ & \lambda r_0 \ \text{vec1} \ \text{vec2} \ \text{vec1}^{\text{shp}} \ \text{vec2}^{\text{shp}}. \\
& \text{alloc } (\text{bytes } \text{vec1}^{\text{shp}}) \ (\lambda r_1. \\
& \quad \text{let } \text{tmp} = \text{vectorAdd } r_1 \ \text{vec1} \ \text{vec2} \ \text{vec1}^{\text{shp}} \ \text{vec2}^{\text{shp}} \text{ in} \\
& \quad \text{vectorNorm } r_0 \ \text{tmp} \ \text{vec1}^{\text{shp}})
\end{align*}
\]
In this program, there is no shape computation at runtime.
3.6 Properties of shape translation
The target language of shape translation is a subset of DPS-\tilde{F} called Shape-\tilde{F}. The syntax of the subset
is given in Figure 9. It includes nested pairs, of statically-known depth, to represent shapes, but it does
not include vectors. This gives Shape-\tilde{F} an important property:
**Theorem 1.** Expressions resulting from shape translation do not require any heap memory allocation.
*Proof.* All the non-shape expressions have either scalar or function type. As shown in Figure 7, all scalar-typed expressions are translated into the scalar shape (\circ), which can be stack-allocated. Function-typed expressions can also be stack-allocated: because partial application is not allowed, the environment captured by a closure never escapes its scope, and so the closure environment can be stack-allocated. Finally, the remaining case consists of expressions resulting from the shape translation of vector expressions. Since the number of dimensions of the original vector expression is statically known, the translated expressions are tuples of known depth, which can easily be allocated on the stack.
Next, we show the properties of our translation algorithm. First, let us investigate the impact of shape
translation on \tilde{F} types. For array types, we need to represent the shape in terms of the shape of each
element of the array, and the cardinality of the array. We encode this information as a tuple. For scalar
type and cardinality type expressions, the shape is a cardinality expression. This is captured in the
following theorem:
\[
\begin{align*}
s & ::= \text{sym} \mid \text{let } x = s \text{ in } s \\
P & ::= \circ \mid N \mid (N, P) \\
c & ::= \text{vecShp} \mid \text{fst} \mid \text{snd} \mid +^c \mid \ast^c \\
S & ::= S \Rightarrow \text{Shp} \mid \text{Shp} \\
\text{Shp} & ::= \text{Card} \mid (\text{Card} \ast \text{Shp})
\end{align*}
\]
Fig. 9. Shape-$\tilde{F}$ syntax, which is a subset of the syntax of DPS-$\tilde{F}$ presented in Figure 4.
**Theorem 2.** If the expression $e$ in $\tilde{F}$ has the type $T$, then $S[e]$ has type $S_T[T]$.
*Proof.* By induction on the translation rules from $\tilde{F}$ to Shape-$\tilde{F}$.
In order to simplify the shape translation algorithm and provide better guarantees about the expressions resulting from shape translation, two important restrictions are applied to $\tilde{F}$ programs.
(1) The accumulating function which is used in the $\text{reduce}$ operator should preserve the shape of the initial value. Otherwise, converting the result shape into a closed-form polynomial expression requires solving a recurrence relation.
(2) The shape of both branches of a conditional should be the same.
These two restrictions simplify the shape translation as is shown in Figure 7.
**Theorem 3.** Expressions resulting from shape translation can be evaluated in time linear in the size of the terms of the original $\tilde{F}$ program.
*Proof.* The proof has two steps. First, translating an $\tilde{F}$ expression into its shape expression yields an expression of smaller size, which can be proved by induction on the translation rules. Second, the run time of a shape expression is linear in its size. The important case is the $\text{reduce}$ construct: thanks to the restrictions above, its shape can be computed without any recursion.
Finally, we believe that our translation is correct based on our successful implementation. However, we leave a formal semantics definition and the proof of correctness of the transformation as future work.
4 IMPLEMENTATION
4.1 $\tilde{F}$ Language
We implemented $\tilde{F}$ as a subset of F#. Hence $\tilde{F}$ programs are normal F# programs. Furthermore, the built-in constants (presented in Figure 2) are defined as a library in F# and all library functions (presented in Figure 3) are implemented using these built-in constants. If a given expression is in the subset supported by $\tilde{F}$, the compiler accepts it.
For implementing the transformations presented in the previous sections, instead of modifying the F# compiler, we use F# quotations [31]. Note that there is no need for the user to use F# quotations in order to implement a $\tilde{F}$ program. The F# quotations are only used by the compiler developer in order to implement transformation passes.
Although $\tilde{F}$ expressions are F# expressions, it is not possible to express the memory management constructs used by DPS-$\tilde{F}$ expressions using the F# runtime. Hence, after translating $\tilde{F}$ expressions to DPS-$\tilde{F}$, we compile the resulting program down into a programming language that provides memory management facilities, such as C. The generated C code can either be used as kernels by other C programs, or invoked in F# as a native function using the interoperability facilities provided by the Common Language Runtime (CLR).
Next, we discuss why we choose C and how the C code generation works.
4.2 C Code Generation
There are many programming languages that provide manual memory management. Among them, we are interested in those that give us full control over the runtime environment while still being easy to debug. Hence, low-level imperative languages such as C and C++ are better candidates than LLVM, mainly for ease of debugging.
One of the main advantages of DPS-$\tilde{F}$ is that we can generate idiomatic C from it. More specifically, the generated C code is similar to a handwritten C program, because memory can be managed in a stack-like fashion. The translation from DPS-$\tilde{F}$ programs into C code is quite straightforward.
As our DPS-encoded programs use memory in a stack-like fashion, memory can be managed more efficiently. More specifically, we first allocate a buffer of a fixed size at the start. Then, instead of using the standard `malloc` function, we bump-allocate from this pre-allocated buffer: in most cases, allocating memory is only a pointer-arithmetic operation that advances the pointer past the last allocated element of the buffer. If a program needs more memory than the buffer provides, we double the size of the buffer. Memory deallocation is also very efficient in this scheme: instead of invoking the `free` function, we simply move the pointer back to the last allocated storage.
We compile lambdas by performing closure conversion. Because DPS-$\tilde{F}$ does not allow partial application, the environment captured by a closure can be stack allocated.
As mentioned in Section 2, polymorphism is not allowed except for some built-in constructs of the language (e.g. `build` and `reduce`). Hence, all usages of these constructs are monomorphic, and the C code generator knows exactly which code to generate for them. Furthermore, the C code generator does not need to perform closure conversion for the lambdas passed to the built-in constructs; instead, it can generate an efficient for-loop in place. As an example, the generated C code for the vectorSum function of $\tilde{F}$ is:
```c
double vector_sum(vector v) {
double sum = 0;
for (index idx = 0; idx < v->length; idx++) {
sum = sum + v->elements[idx];
}
return sum;
}
```
Finally, for the `alloc` construct in DPS-$\tilde{F}$, the generated C code consists of the following three parts. First, a memory allocation statement is generated which allocates the given amount of storage. Second, the corresponding body of code which uses the allocated storage is generated. Finally, a memory deallocation statement is generated which frees the allocated storage. The generated C code for our working example is:
```c
double f(storage r0, vector vec1_dps, vector vec2_dps,
vec_shape vec1_shp, vec_shape vec2_shp) {
storage r1 = malloc(vector_bytes(vec1_shp));
vector tmp_dps =
vector_add_dps(r1, vec1_dps, vec2_dps, vec1_shp, vec2_shp);
double result = vector_norm_dps(r0, tmp_dps, vec1_shp);
free(r1);
return result;
}
```
We use our own implementation of `malloc` and `free` for bump allocation.
5 EXPERIMENTAL RESULTS
For the experimental evaluation, we use an iMac machine equipped with an Intel Core i5 CPU running at 2.7GHz and 32GB of DDR3 RAM at 1333MHz. The operating system is OS X 10.10.5. We use Mono 4.6.1 as the runtime system for F# programs and Clang 700.1.81 for compiling the C++ code and the generated C.
Throughout this section, we compare the performance and memory consumption of the following alternatives:
- **F#:** Using the array operations (e.g. map) provided in the standard library of F# to implement vector operations.
- **CL:** Leaky C code, which is the C code generated from $\tilde{F}$, using malloc to allocate vectors and never calling free.
- **CG:** C code using Boehm GC, which is the C code generated from $\tilde{F}$, using GC_malloc of the Boehm GC to allocate vectors.
- **CLF:** CL + Fused Loops, which performs deforestation and loop fusion before CL.
- **D:** DPS C code using system-provided malloc/free, which translates $\tilde{F}$ programs into DPS-$\tilde{F}$ before generating C code. Hence, the generated C code frees all allocated vectors. In this variant, the malloc and free functions are used for memory management.
- **DF:** D + Fused Loops, which is similar to the previous one, but performs deforestation before translating to DPS-$\tilde{F}$.
- **DFB:** DF + Buffer Optimizations, which performs the buffer optimizations described in Section 3.5 (such as allocation hoisting and merging) on DPS-$\tilde{F}$ expressions.
- **DFBS:** DFB using stack allocator, same as DFB, but using bump allocation for memory management, as previously discussed in Section 4.2. This is the best C code we generate from $\tilde{F}$.
- **C++:** Idiomatic C++, which uses a handwritten C++ vector library, depending on C++14 move construction and copy elision for performance, with explicit programmer indication of fixed-size (known at compile time) vectors, permitting stack allocation.
- **E++:** Eigen C++, which uses the Eigen [13] library which is implemented using C++ expression templates to effect loop fusion and copy elision. Also uses explicit sizing for fixed-size vectors.
First, we investigate the behavior of several variants of generated C code for two micro benchmarks. More specifically, we see how DPS improves both performance and memory consumption in comparison with an F# version. The behavior of the generated DPS code is very similar to that of manually handwritten C++ code and the Eigen library.
Then, we demonstrate the benefit of using DPS for some real-life computer vision and machine learning workloads motivated in [28]. Based on the results for these workloads, we argue that using DPS is a great choice for generating C code for numerical workloads, such as computer vision algorithms, running on embedded devices with a limited amount of memory available.
### 5.1 Micro Benchmarks
Figure 10 shows the experimental results for two micro benchmarks: the first adds three vectors, and the second computes a vector cross product.
```
add3: vectorAdd(vectorAdd(vec1, vec2), vec3)
```
in which all the vectors contain 100 elements. This program is run one million times in a loop, and timing results are shown in Figure 10a. In order to highlight the performance differences, the figure uses a logarithmic scale on its Y-axis. Based on these results we make the following observations. First, all C and C++ programs outperform the F# program, except the one that uses the Boehm GC. This shows the overhead of garbage collection in the F# runtime environment and in the Boehm GC. Second, loop fusion has a positive impact on performance, because this program involves creating an intermediate vector (the one resulting from the addition of vec1 and vec2). Third, the generated DPS C code that uses buffer optimizations (DFB) is faster than the one without this optimization (DF), mainly because the result vector is allocated only once for DFB, whereas it is allocated once per iteration in DF. Finally, there is no clear advantage for the C++ versions. This is mainly due to the fact that the vectors have sizes not known at compile time, hence the elements are not stack allocated. The Eigen version partially compensates for this limitation by using vectorized operations, making its performance comparable to our best generated DPS C code.
The peak memory consumption of this program for the different approaches is shown in Figure 10b. This measurement is performed by running the program with a varying number of iterations. Both axes use logarithmic scales to better demonstrate the differences in memory consumption. As expected, F# uses almost the same amount of memory over time, due to garbage collection; however, the runtime system sets the initial amount to 15MB by default. Also unsurprisingly, leaky C uses memory linear in the number of iterations, albeit from a lower base. The fused version of leaky C (CLF) decreases the consumed memory by a constant factor. Finally, DPS C and C++ use a constant amount of space, which is one order of magnitude less than that used by the F# program, and half the amount used by the generated C code using Boehm GC.
The second micro benchmark is
```
cross: vectorCross(vec1, vec2)
```
run one million times, in which the two vectors contain 3 elements. Timing results are in Figure 10c. We see that the F# program is faster than the generated leaky C code, perhaps because garbage collection is invoked less frequently than in add3. Overall, in both cases, the performance of the F# program and the generated leaky C code is very similar. In this example, loop fusion does not have any impact on performance, as the program contains only one operator. As in the previous benchmark, all variants of generated DPS C code have a similar performance and outperform the generated leaky C code and the one using Boehm GC, for the same reasons. Finally, both the handwritten and Eigen C++ programs have a performance similar to our generated C programs. For this program, both C++ libraries provide fixed-sized vectors, which results in stack allocating the elements of the two vectors; this has a positive impact on performance. Furthermore, as there is no SIMD version of the cross operator, we do not observe a visible advantage for Eigen.
Finally, we discuss the memory consumption of the second program, shown in Figure 10d. This experiment leads to the same observations as the one for the first program. However, as the second program does not involve creating any intermediate vector, loop fusion does not improve the peak memory consumption.
The presented micro benchmarks show that our DPS-generated C code improves both performance and memory consumption by an order of magnitude in comparison with an equivalent F# program. Also, the generated DPS C code promptly deallocates memory, which keeps the peak memory consumption constant over time, as opposed to the linear increase in memory consumption of the generated leaky C code. In addition, by using bump allocators the generated DPS C code can improve performance further. Finally, we see that the generated DPS C code behaves very similarly to both the handwritten and Eigen C++ programs.
Next, we investigate the performance and memory consumption of real-life workloads.
### 5.2 Computer Vision and Machine Learning Workloads
**Bundle Adjustment.** Bundle adjustment [35] is a computer vision problem with many applications. In this problem, the goal is to optimize several parameters in order to have an accurate estimate of the projection of a 3D point by a camera. This is achieved by minimizing an objective function representing the reprojection error. This objective function is passed to a nonlinear minimizer as a function handle, and is typically called many times during the minimization.
One of the core parts of this objective function is the `project` function, which is responsible for finding the projected coordinates of a 3D point by a camera, including a model of the radial distortion of the lens. The $\bar{F}$ implementation of this method is given in Figure 12.
Figure 11a shows the runtime of different approaches after running `project` ten million times. First, the F# program performs similarly to the leaky generated C code and the C code using Boehm GC. Second, loop fusion improves speed fivefold. Third, the generated DPS C code is slower than the generated leaky C code, mainly due to costs associated with intermediate deallocations. However, this overhead is reduced by using bump allocation and performing loop fusion and buffer optimizations. Finally, we observe that the best version of our generated DPS C code marginally outperforms both C++ versions.
The peak memory consumption of different approaches for Bundle Adjustment is shown in Figure 11b. First, the F# program uses three orders of magnitude less memory in comparison with the generated leaky C code, which remains linear in the number of calls. This improvement is four orders of magnitude in the case of the generated C code using Boehm GC. Second, loop fusion improves the memory consumption of the leaky C code by an order of magnitude, due to removing several intermediate vectors. Finally, all generated DPS C variants, as well as the C++ versions, consume the same amount of memory; their peak memory consumption is an order of magnitude better than the F# baseline.
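To make the DPS translation concrete, here is what a destination-passing rendering of the radialDistort helper from Figure 12 could look like. This is a hedged sketch: the C-style names and the flat-array vector representation are assumptions for illustration, not the compiler's actual output. Note that in the listing, vectorNorm yields the squared norm (rsq), which the sketch mirrors:

```cpp
#include <cassert>

// Hypothetical DPS rendering of radialDistort: the caller supplies the
// destination buffer for the scaled vector instead of receiving a fresh one.
void radial_distort_dps(double* dst, const double* radical,
                        const double* proj, int n) {
    double rsq = 0.0;                 // vectorNorm proj (squared norm)
    for (int i = 0; i < n; ++i) rsq += proj[i] * proj[i];
    double L = 1.0 + radical[0] * rsq + radical[1] * rsq * rsq;
    for (int i = 0; i < n; ++i)
        dst[i] = proj[i] * L;         // vectorSMul proj L, into dst
}
```

Because `project` calls this helper once per invocation, passing a destination lets the generated code reuse a single buffer across the ten million calls instead of allocating one result vector per call.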
**The Gaussian Mixture Model.** The Gaussian Mixture Model (GMM) is a workhorse machine learning tool, used for computer vision applications such as image background modelling and image denoising, as well as for semi-supervised learning.
In GMM, loop fusion can successfully remove all intermediate vectors. Hence, there is no difference between CL and CLF, or between DS and DSF, in terms of either performance or peak memory consumption, as can be observed in Figure 11c and Figure 11d. Neither C++ library supports the loop fusion needed for GMM; hence, they perform three orders of magnitude worse than our fused and DPS-generated C code.
Due to the cost of performing memory allocation (and, for DPS, deallocation) at each iteration, the F# program, the leaky C code, and the generated DPS C code exhibit worse performance than the fused and stack-allocated versions. Furthermore, as the leaky C code does not deallocate the intermediate vectors, its memory consumption increases monotonically.
**Hand tracking.** Hand tracking is a computer vision/computer graphics workload [32] that includes matrix-matrix multiplies and numerous combinations of fixed- and variable-sized vectors and matrices. Figure 11e shows performance results of running one of the main functions of hand tracking one million times. As in the cross micro-benchmark, we see no advantage for loop fusion, because in this function the intermediate vectors have multiple consumers. As in previous cases, generating DPS C code improves runtime performance, which is improved even more by using bump allocation and performing loop fusion and
```
let radialDistort = λ (radical: Vector) (proj: Vector).
  let rsq = vectorNorm proj
  let L = 1.0 + radical.[0] * rsq + radical.[1] * rsq * rsq
  vectorSMul proj L

let rodriguesRotate = λ (rotation: Vector) (x: Vector).
  let sqtheta = vectorNorm rotation
  if sqtheta != 0. then
    let theta = sqrt sqtheta
    let thetaInv = 1.0 / theta
    let w = vectorSMul rotation thetaInv
    let wCrossX = vectorCross w x
    let tmp = (vectorDot w x) * (1.0 - (cos theta))
    let v1 = vectorSMul x (cos theta)
    let v2 = vectorSMul wCrossX (sin theta)
    vectorAdd (vectorAdd v1 v2) (vectorSMul w tmp)
  else
    vectorAdd x (vectorCross rotation x)

let project = λ (cam: Vector) (x: Vector).
  let rotation = vectorSlice cam 0 2
  let center = vectorSlice cam 3 5
  let radical = vectorSlice cam 9 10
  let Xcam = rodriguesRotate rotation (vectorSub x center)
  let distorted =
    radialDistort radical (vectorSMul (vectorSlice Xcam 0 1) (1.0/Xcam.[2]))
  vectorAdd (vectorSlice cam 7 8) (vectorSMul distorted cam.[6])
```
buffer optimizations. However, in this case the idiomatic C++ version outperforms the generated DPS C code. Figure 11f shows that the DPS-generated programs consume an order of magnitude less memory than the F# baseline, matching the C++ versions.
## 6 RELATED WORK
### 6.1 Programming Languages without GC
Functional programming without garbage collection dates back to Linear Lisp [1]. However, most functional languages (dating back to Lisp, around 1959) have used garbage collection for managing memory.
Region-based memory management was first introduced in ML [33] and then in an extended version of C, called Cyclone [12], as an alternative or complement to runtime garbage collection. This is achieved by allocating memory regions based on the liveness of objects. This approach improves both performance and memory consumption in many cases. However, in many cases the size of the regions is not known, whereas in our approach the size of each storage location is computed using the shape expressions. Also, in practice there are cases in which one needs to combine this technique with garbage collection [14], as well as cases in which the performance is still not satisfying [2, 34]. Furthermore, the complexity of region inference hinders the maintenance of the compiler, in addition to the overhead it causes at compilation time.
Safe [23, 24] suggests a simpler region inference algorithm by restricting the language to a first-order functional language. Also, linear regions [8] relax the stack-discipline restriction on region-based memory management, motivated by use cases that consume an unbounded amount of memory due to recursion. A Haskell implementation of this approach is given in [20]. The restricted form of recursion allowed by $\bar{F}$ means that we never face similar issues; hence, we choose to always follow the stack discipline for memory management.
### 6.2 Estimation of Memory Consumption
One can use type systems to estimate memory consumption. Hofmann and Jost [17] enrich the type system with certain annotations and use linear programming for heap-consumption inference. Another approach is to use sized types [37] for the same purpose.
Size slicing [16] uses a technique similar to ours for inferring the shape of arrays in the Futhark programming language. However, in $\bar{F}$ we guarantee that shape inference is simplified and based only on size computation, whereas they rely on compiler optimizations for its simplification, and in some cases it can fall back to inefficient approaches which, in the worst case, could be as expensive as evaluating the original expression [17]. The FISh programming language [19] also makes shape information explicit in programs, and resolves shapes at compilation time using partial evaluation.
Clinger [4] explores different space efficiency classes. Based on the proposed formalism, he formally defines what it means for a language to properly handle tail recursion. Next, we survey related work on optimizing tail-recursive calls.
### 6.3 Optimizing Tail Calls
Destination-passing style was originally introduced in [21], then was encoded functionally in [22] by using linear types [36, 40]. Walker and Morrisett [41] use extensions to linear type systems to support aliasing which is avoided in vanilla linear type systems. The idea of destination-passing style has many similarities to tail-recursion modulo cons [9, 38].
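The connection to tail-recursion modulo cons can be illustrated with a small sketch: the producer receives a pointer to the hole it must fill, so the construction of each cons cell is in destination-passing form. The names and list representation below are illustrative assumptions:

```cpp
#include <cassert>
#include <cstdlib>

struct Node { int head; Node* tail; };

// Map (+1) over a list, writing each cons cell into the destination slot
// supplied by the caller. The recursion (here written as a loop) stays in
// tail position because the hole to fill is passed forward: after emitting
// a cell, the new hole is that cell's tail field.
void map_incr_dps(Node** dst, const Node* xs) {
    while (xs) {
        Node* cell = (Node*)malloc(sizeof(Node));
        cell->head = xs->head + 1;
        cell->tail = nullptr;
        *dst = cell;          // fill the caller's hole
        dst = &cell->tail;    // the next hole is this cell's tail
        xs = xs->tail;
    }
    *dst = nullptr;           // terminate the output list
}
```

Without destination passing, the natural functional definition would cons onto the result of the recursive call, making the call non-tail; threading the destination restores constant stack usage.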
### 6.4 Array Programming Languages
APL [18] can be considered the first array programming language. Futhark [15, 16] and SAC [11] are functional array programming languages. One interesting property of such languages is their support for fusion, which is achieved in $\bar{F}$ by certain rewrite rules. However, as this topic is out of the scope of this paper, we leave further discussion to future work.
There are many domain-specific languages (DSLs) for numerical workloads such as Opt [6], Halide [26], Diderot [3], and OptiML [29]. All these DSLs generate parallel code from their high-level programs. Furthermore, Halide [26] exploits the memory hierarchy by making tiling and scheduling decisions, similar to Spiral [25] and LGen [27]. Although both parallelism and improving use of a memory hierarchy are orthogonal concepts to translation into DPS, they are still interesting directions for $\bar{F}$.
REFERENCES
Using Destination-Passing Style to Compile a Functional Language into Efficient Low-Level Code
Summary
This application note describes how the Vivado® High-Level Synthesis (HLS) tool enables higher productivity in protocol processing designs by providing abstractions in critical areas. This simplifies designs and makes them less error-prone. While the basics of implementing protocol processing designs using Vivado HLS are fairly straightforward, there are some subtle aspects that warrant a detailed explanation. This application note includes:
- **Basic Concepts and Code Examples** for building a packet processing system in Vivado HLS.
- **Advanced Concepts and Code Examples**.
- Included with this application note is an example design that demonstrates how a protocol processing subsystem can be built using Vivado HLS. This document includes information about how to use the example design, along with Example Design File Location and Details and an Example Design Description. The example design implements a basic set of networking modules that implement Address Resolution Protocol (ARP) and ping functionality. The required Vivado HLS files are provided, as well as a Vivado Design Suite project that you can use to implement the design on a Xilinx® VC709 development board.
Introduction
Protocol Processing
Protocol processing on different levels is present in any modern communication system, because any exchange of information requires the use of a communication protocol. A protocol typically organizes data into packets, which must be created by the sender and reassembled at the receiver while ensuring adherence to the protocol specification. This makes protocol processing ubiquitous. Consequently, implementing protocol processing functionality efficiently is important for FPGA design.
This application note explains how to address key challenges encountered when processing protocols using Vivado HLS.
Raising the Level of Abstraction and Related Benefits
Vivado HLS raises the level of abstraction in system design by:
- Using C/C++ as a programming language and leveraging the high-level constructs it offers.
- Providing additional data primitives that allow you to easily use basic hardware building blocks (bit vectors, queues, etc.)
These characteristics allow you to use Vivado HLS to address common protocol system design challenges more easily than when using RTL, with the following benefits:
**System assembly**
Vivado HLS modules are treated as functions, with the function definition being equivalent to an RTL description of the module and a function call being equivalent to a module instantiation. This simplifies the structural code describing the system by reducing the amount of code that has to be written.
**Simple FIFO/memory access**
Accessing a memory or a FIFO in Vivado HLS is done in one of two ways: through methods of an appropriate object (for example, the read and write methods of a stream object) or by accessing a standard C array, which synthesis then implements as block RAM or distributed RAM. Vivado HLS takes care of the additional signaling, synchronization, and/or addressing as required.
**Abstraction of control flow**
Vivado HLS provides a set of flow-control aware interfaces ranging from simple FIFO interfaces to AXI4-Stream. In all of these interfaces, you access the data without having to check for back pressure or data availability. Vivado HLS schedules execution appropriately to take care of all contingencies, while ensuring correct execution.
**Word realignment**
The abstraction of flow control enables you to use Vivado HLS to perform core protocol processing tasks, such as word realignment, easily. Data access aided by the abstraction of flow control eliminates the need for error-prone reading from and writing to FIFOs/memories, and thus allows you to write simpler code.
**Easy architecture exploration**
In Vivado HLS, you can insert pragma directives in the code to communicate the features of your design to Vivado HLS. These can range from fundamental issues, such as the pipelining of a module, to more mundane ones, such as the depth of a FIFO queue. In any case, pragma directives provide you with the ability to explore a wide range of architectural alternatives without requiring changes to the implementation code itself.
**C and C/RTL simulation**
Vivado HLS designs can be verified using a two-step simulation process.
1. C simulation, in which the C/C++ is compiled and executed like a normal C/C++ program. While this simulation is not cycle-accurate, it mirrors the functionality of the auto-generated RTL code very well. This enables functional verification of the design by using C/C++ test benches at C/C++ execution speeds, thus enabling very long simulations, which are not possible in RTL.
2. Verification with C/RTL co-simulation. Vivado HLS automatically generates an RTL test bench from the C/C++ test bench, then implements and executes an RTL simulation that can be used to check the accuracy of the implementation.
**Understanding Directives**
Because the C++ code used in Vivado HLS is compact in nature, you can leverage its features to realize development time and productivity benefits as well as improvements in code maintainability and readability. Furthermore, Vivado HLS allows you to maintain control over the architecture and its features. To take full advantage of its capabilities, correct understanding and use of Vivado HLS directives is fundamental.
SDNet
HLS occupies an intermediate slot in the hierarchy of Xilinx-provided packet processing solutions. It is complemented by Vivado Design Suite SDNet [Ref 1], which uses:
- A domain-specific language to offer a simpler, if more constrained, way of expressing protocol processing systems
- RTL, which allows for the implementation of a considerably wider breadth of systems that Vivado HLS is not able to express (for example, systems requiring detailed clock management using DCMs or differential signaling).
You can, however, use HLS to implement the vast majority of protocol processing solutions efficiently, without compromising the quality of results or design flexibility.
Basic Concepts and Code Examples
This section provides guidelines and code examples for building a simple protocol processing system with Vivado HLS.
When starting a new design, the most basic tasks to be accomplished are:
- Determining the design structure. An example is provided in the section Setting Up a Simple System.
- Implementing the design in Vivado HLS. An example is provided in the section Implementing a State Machine with Vivado HLS.
Setting Up a Simple System
In Vivado HLS, the basic building block of a system is a C/C++ function. Building a system consisting of modules and submodules essentially means that a top-level function calls lower level functions. Figure 1 illustrates a simple three-stage pipeline example to introduce the basic concepts for system building in Vivado HLS. Protocol processing is typically performed in pipelined designs, with each stage addressing a specific part of the processing.
Code Example 1 - Creating a Simple System in Vivado HLS
```
1 void topLevelModule(stream<axiWord> &inData, stream<axiWord> &outData) {
2 #pragma HLS dataflow interval=1
3
4 #pragma HLS INTERFACE port=inData axis
5 #pragma HLS INTERFACE port=outData axis
6
7 static stream<ap_uint<64> > modOne2modTwo;
8 static stream<ap_uint<64> > modTwo2modThree;
9
10 #pragma HLS STREAM variable = modOne2modTwo depth = 4;
11 #pragma HLS STREAM variable = modTwo2modThree depth = 4;
12
13 moduleOne(inData, modOne2modTwo);
14 moduleTwo(modOne2modTwo, modTwo2modThree);
15 moduleThree(modTwo2modThree, outData);
16 }
```
Code Example 1 - Creating a Simple System in Vivado HLS creates the top module function that calls the other sub-functions. The top module function uses two parameters, both of which are objects of class stream, which is one of the template classes provided by the Vivado HLS libraries. A stream is a Vivado HLS modeling construct that represents an interface over which data is to be exchanged in a streaming manner. A stream can be implemented as a FIFO queue or shift register, as detailed in the Vivado Design Suite User Guide: High-Level Synthesis (UG902) [Ref 2]. A stream is a template class that can be used with any C++ construct. In this case, a defined data structure (struct) called axiWord is used. This is shown in Code Example 2 - Definition of a C++ struct for Use in a Stream Interface.
Code Example 2 - Definition of a C++ struct for Use in a Stream Interface
```cpp
struct axiWord {
ap_uint<64> data;
ap_uint<8> strb;
ap_uint<1> last;
};
```
This struct defines part of the fields for an AXI4-Stream interface. This kind of interface is automatically supported in Vivado HLS and can be specified using a pragma statement. Pragmas are directives to the Vivado synthesis tool that help guide the tool to reach the required results. The pragmas in lines 4 and 5 of Code Example 1 - Creating a Simple System in Vivado HLS tell Vivado HLS that both parameters (essentially the input and output ports of the top module) are to use AXI4-Stream interfaces and provide a name for the resulting interface. The AXI4-Stream interface includes two mandatory signals, the valid and ready signals, which were not included in the declared struct. This is because the Vivado HLS AXI4 interface manages these signals internally, which means that they are transparent to user logic. As mentioned earlier, Vivado HLS completely abstracts flow control when using AXI4-Stream interfaces.
An interface does not have to use AXI4-Stream. Vivado HLS provides a rich set of bus interfaces, which are listed in the Vivado Design Suite User Guide: High-Level Synthesis (UG902) [Ref 2]. AXI4-Stream is used here as an example of a popular, standardized interface that can be used for packet processing.
The next task in implementing your design is to ensure that the three modules are connected to each other. This is also accomplished through streams, but in this case, the streams are internal to the top module. Lines 7 and 8 of Code Example 1 - Creating a Simple System in Vivado HLS declare two streams for this purpose.
These streams:
- Make use of another Vivado HLS construct, ap_uint. The ap_uint construct provides bit-accurate unsigned integers that can be thought of, and manipulated, as bit vectors. Because it is a template class, its width must also be specified; in this case, 64 bits are used, matching the width of the data members of the input and output interfaces of the top module.
- Are declared as static variables. A static variable maintains its value over multiple function calls. The top-level module (and thus all its submodules) is called once in every clock cycle when executed as a sequential C/C++ program, so any variables that must keep their values intact from one cycle to the next must be declared as static.
As mentioned, a stream interface can be implemented as a FIFO queue or as a memory, which means that it can also have a specific depth to act as a buffer for data traffic. The depth of the FIFO queue can be set for each stream using the stream pragma, as shown in lines 10 and 11 of Code Example 1 - Creating a Simple System in Vivado HLS. For typical feed-forward designs, buffering might not be required. Omitting the pragma causes Vivado HLS to automatically use a FIFO with a depth of one, which allows Vivado HLS to efficiently implement small FIFOs using flip-flops, and thus save block RAMs and LUTs.
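For host-side intuition, a stream of this kind can be modeled in plain C++ with a queue. This is a behavioral stand-in for illustration only, not the hls::stream implementation, which additionally carries FIFO depth and hardware handshake semantics:

```cpp
#include <cassert>
#include <queue>

// Minimal behavioral stand-in for a Vivado HLS stream<T>: a FIFO exposing
// the write/read/empty methods used in the code examples. Both read forms
// are provided: one returning the value, one storing into a reference.
template <typename T>
class Stream {
    std::queue<T> q;
public:
    void write(const T& v) { q.push(v); }
    T read() { T v = q.front(); q.pop(); return v; }   // value-returning form
    void read(T& v) { v = q.front(); q.pop(); }        // reference form
    bool empty() const { return q.empty(); }
};
```

On hardware, a read from an empty stream blocks until data arrives; in this host model the caller is expected to guard reads with empty(), as the examples in this note do.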
Creating Pipelined Designs
The last pragma to discuss is perhaps the most important one. The dataflow pragma in line 2 of Code Example 1 - Creating a Simple System in Vivado HLS instructs Vivado HLS to attempt to schedule the execution of all the sub-functions in this function in parallel. It is important to note that the effect of the dataflow pragma does not propagate down the hierarchy of the design. Thus, if a lower level function contains sub-functions whose execution has to be scheduled in parallel, then the dataflow pragma must be specified in that function separately. The parameter interval defines the Initiation Interval (II) for this module. II defines the throughput of the design by telling Vivado HLS how often this module has to be able to process a new input data word. This does not preclude the module being internally pipelined and having a latency greater than 1. An II = 2 means that the module has 2 cycles to complete the processing of a data word before having to read in a new one. This can allow Vivado HLS to simplify the resulting RTL for a module. That being said, in a typical protocol processing application the design has to be able to process one data word in each clock cycle, thus from now on an II = 1 is used.
Note: The parameter interval is a deprecated feature and is subject to change.
Finally, you call the functions themselves, which in Vivado HLS also corresponds to instantiating the modules. The parameters that are passed to each module essentially define the module's communication ports. In this case, you create a chain of the three modules by connecting the input to the first module, then the first module to the second over stream modOne2modTwo, and so on.
Implementing a State Machine with Vivado HLS
Protocol processing is inherently stateful. You are required to read in successive packet words arriving onto a bus over many clock cycles and decide on further operations according to some field of the packet. The common way to handle this type of processing is by using a state machine, which iterates over the packet and performs the necessary processing. Code Example 3 - Finite State Machine using Vivado HLS shows a simple state machine, which either drops or forwards a packet, depending on an input from a previous stage.
The function receives three arguments: the input packet data over the inData stream, a one-bit flag that shows if a packet is valid or not over the validBuffer stream, and the output packet data stream, called outData.
Note: Parameters in the Vivado HLS functions are passed by reference. This is necessary when using Vivado HLS streams, which are complex classes. Simpler data types like the ap_uint can also be passed by value.
The pipeline pragma in line 2 of Code Example 3 - Finite State Machine using Vivado HLS instructs Vivado HLS to pipeline this function to achieve an initiation interval of 1 (II = 1),
meaning that it is able to process one new input data word every clock cycle. Vivado HLS examines the design and determines how many pipeline stages it needs to introduce to the design to meet the required scheduling restrictions. To describe this pragma a bit further, assume you are performing a read-modify-write operation. If it is not pipelined, an II=1 cannot be met because scheduling dictates that the read occur in clock cycle T and the write in clock cycle T+1. This is the default behavior in Vivado HLS without the pipeline pragma. Inserting the pragma causes Vivado HLS to schedule the access in a way that the target II value can be reached.
**Caution:** If accesses are interdependent, reaching the target II value might be impossible. Additional explanation on this topic can be found in Advanced Concepts and Code Examples.
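A canonical example of such a read-modify-write is a histogram update: each loop iteration reads a memory location, increments it, and writes it back, and this loop-carried dependence is exactly what can make a target II hard to reach. The sketch below is host-testable; under HLS the pragma is a request, and the achieved II depends on how the memory is implemented:

```cpp
#include <cassert>
#include <cstdint>

// Each iteration performs hist[in[i]] read -> +1 -> write: the classic
// read-modify-write pattern that HLS must schedule across pipeline stages.
void histogram(const uint8_t* in, int n, uint32_t hist[256]) {
    for (int i = 0; i < 256; ++i) hist[i] = 0;
    for (int i = 0; i < n; ++i) {
#pragma HLS pipeline II=1
        hist[in[i]] += 1;   // loop-carried dependence through hist[]
    }
}
```

When consecutive inputs hit the same bin, the write of one iteration feeds the read of the next, which is the interdependence the caution above refers to.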
### Code Example 3 - Finite State Machine using Vivado HLS
```c
1 void dropper(stream<axiWord>& inData, stream<ap_uint<1> >& validBuffer, stream<axiWord>& outData) {
2 #pragma HLS pipeline II=1 enable_flush
3
4   static enum dState {D_IDLE = 0, D_STREAM, D_DROP} dropState;
5   axiWord currWord = {0, 0, 0};
6
7   switch(dropState) {
8   case D_IDLE:
9     if (!validBuffer.empty() && !inData.empty()) {
10      ap_uint<1> valid = validBuffer.read();
11      inData.read(currWord);
12      if (valid) {
13        outData.write(currWord);
14        dropState = D_STREAM;
15      } else
16        dropState = D_DROP;
17    }
18    break;
19  case D_STREAM:
20    if (!inData.empty()) {
21      inData.read(currWord);
22      outData.write(currWord);
23      if (currWord.last)
24        dropState = D_IDLE;
25    }
26    break;
27  case D_DROP:
28    if (!inData.empty()) {
29      inData.read(currWord);
30      if (currWord.last)
31        dropState = D_IDLE;
32    }
33    break;
34  }
35 }
```
Line 4 declares a static enumeration variable that expresses state in this FSM. Using an enumeration is optional but allows for more legible code because states can be given proper names. However, any integer or ap_uint variable can also be used with similar results. Line 5 declares a variable of type axiWord, in which packet data to be read from the input is stored.
The switch statement in line 7 represents the actual state machine. Using a switch is recommended but not mandatory. An if-else decision tree would also perform the same functionality. The switch statement allows the tool to enumerate all the states and optimize the resulting state machine RTL code efficiently.
Execution starts in the `D_IDLE` state, where the FSM reads from the two input streams in lines 10 and 11. These two lines demonstrate both forms of the read method of the stream object: one returns the value read, the other stores it into the variable passed as an argument. The read is blocking, which means that if it cannot complete, execution of the remaining code in this function call stalls. This happens when trying to read from an empty stream, which is why both streams are first tested with `empty()`.
You can use the approach described above to describe an explicit state machine in Vivado HLS. In many cases (such as when partially unrolling a loop), Vivado HLS also creates a state machine internally to orchestrate the required control flow.
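To make the blocking-read discipline concrete, here is a minimal plain C++ model of a stream. This is a software stand-in for illustration only, not the actual Vivado HLS `stream` class: in hardware, `read()` on an empty stream stalls the pipeline, which is why the code examples guard every read with `empty()`; `read_nb()` folds that check into the call.

```cpp
#include <cassert>
#include <queue>

// Illustrative software stand-in for an HLS stream (NOT the Vivado HLS class).
template <typename T>
struct Stream {
    std::queue<T> q;
    bool empty() const { return q.empty(); }
    // Blocking read in hardware; in this model the caller must check empty() first.
    T read() { T v = q.front(); q.pop(); return v; }
    // Out-parameter variant, as in inData.read(currWord).
    void read(T& v) { v = read(); }
    // Non-blocking read: returns false (and leaves v untouched) if nothing is there.
    bool read_nb(T& v) {
        if (q.empty()) return false;
        v = read();
        return true;
    }
    void write(const T& v) { q.push(v); }
};
```

In Code Example 3 above, every `read()` is preceded by an `empty()` test for exactly this reason.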
**Stream Splitting and Merging**
In the following code example, two pragmas at the top of the function indicate how Vivado HLS must handle it. The `INLINE off` pragma instructs Vivado HLS not to dissolve this function and absorb it into its parent. Because using the `dataflow` pragma in a function already causes Vivado HLS to respect the boundaries of any functions called directly from it, the `INLINE off` pragma is, strictly speaking, not required in this case.
However, this only holds for the level immediately below the one in which `dataflow` was used. If a sub-function contains further nested sub-functions, those have to be inlined (or not) manually. If no inline directive is given, Vivado HLS decides whether or not to inline based on the size and complexity of each function. For example, if you have a three-layer design and specify `dataflow` on layer 0 (the lowest one), the boundaries of the functions in layer 1 are preserved automatically because of the `dataflow` pragma. This does not, however, apply to the boundaries of the functions in layer 2.
The ability to forward packets to different modules according to some field in the protocol stack, and then to recombine these streams before transmission, is a critical functionality in protocol processing. Vivado HLS allows for the use of high-level constructs to facilitate this, as Code Example 4 - Simple Stream Merge illustrates for the case of a stream merging.
**Code Example 4 - Simple Stream Merge**
```cpp
void merge(stream<axiWord> inData[NUM_MERGE_STREAMS], stream<axiWord> &outData) {
#pragma HLS INLINE off
#pragma HLS pipeline II=1 enable_flush
    static enum mState {M_IDLE = 0, M_STREAM} mergeState;
    static ap_uint<LOG2CEIL_NUM_MERGE_STREAMS> streamSource = 0;
    axiWord inputWord = {0, 0, 0, 0};

    switch (mergeState) {
    case M_IDLE: {
        bool streamEmpty[NUM_MERGE_STREAMS];
#pragma HLS ARRAY_PARTITION variable=streamEmpty complete
        for (uint8_t i = 0; i < NUM_MERGE_STREAMS; ++i)
            streamEmpty[i] = inData[i].empty();
        // Round-robin: start the search just after the stream served last
        for (uint8_t i = 0; i < NUM_MERGE_STREAMS; ++i) {
            uint8_t tempCtr = streamSource + 1 + i;
            if (tempCtr >= NUM_MERGE_STREAMS)
                tempCtr -= NUM_MERGE_STREAMS;
            if (!streamEmpty[tempCtr]) {
                streamSource = tempCtr;
                inputWord = inData[streamSource].read();
                outData.write(inputWord);
                if (inputWord.last == 0)
                    mergeState = M_STREAM;
                break;              // serve at most one stream per cycle
            }
        }
        break;
    }
    case M_STREAM:
        // Forward the remaining words of the selected packet
        if (!inData[streamSource].empty()) {
            inData[streamSource].read(inputWord);
            outData.write(inputWord);
            if (inputWord.last)
                mergeState = M_IDLE;
        }
        break;
    }
}
```
In this example, a module merge is used, which has a stream array (`inData`) as input and a single stream (`outData`) as output. The purpose of this module is to read from the input streams in a fair manner and forward the read data to the output stream. The module is implemented as a two-state FSM, using the same constructs that were previously introduced. The focus of the example is on how the merge functionality over the multiple streams is implemented.
The first state in the FSM ensures fairness when choosing the input stream. This is done using a round-robin algorithm over the queues: the search for new data starts at the queue after the one that was accessed previously. For example, in a four-queue system, if queue 2 was accessed in clock cycle T, then in cycle T+1 the search for data to output starts with queue 3 and goes on to queues 0, 1, and, finally, 2. The code computing `tempCtr` implements this round-robin pointer; the constant `NUM_MERGE_STREAMS` specifies the number of streams that are to be merged. The subsequent `if` statement tests the current stream, identified by the `tempCtr` variable, for content. If it is not empty:
- The stream identified by `tempCtr` is set to be the active stream (`streamSource`).
- A data word is read from that stream.
- If the data word just read is not the last one of its packet, the state machine moves to the `M_STREAM` state, where it outputs the remaining data words from the selected stream.
- When the last data word is processed, the FSM reverts to state `M_IDLE`, where it repeats the previous process.
Splitting an incoming stream would be a similar process. Data words coming from one stream would be routed appropriately to a stream array.
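The round-robin search described above can be modeled in a few lines of ordinary C++. The sketch below is a software model for illustration, not the synthesizable code; note that it uses `%`, whereas the HLS version increments and conditionally subtracts `NUM_MERGE_STREAMS` because a modulo by a non-power-of-two divisor is expensive in hardware.

```cpp
#include <cassert>
#include <queue>
#include <vector>

// Round-robin arbiter model: pick the next non-empty queue, starting the
// search just after the queue that was served last. Returns -1 if all are
// empty. (Illustrative only; not the synthesizable HLS code.)
int rr_select(const std::vector<std::queue<int>>& q, int last) {
    const int n = static_cast<int>(q.size());
    for (int i = 0; i < n; ++i) {
        int idx = (last + 1 + i) % n;   // HLS version avoids '%' for HW cost
        if (!q[idx].empty()) return idx;
    }
    return -1;
}
```

With four queues where only queues 0 and 2 hold data, serving queue 2 last makes queue 0 the next choice, matching the fairness behavior described in the text.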
### Extracting and Realigning Fields
Extracting and realigning fields is one of the most fundamental operations in packet processing. Because packets typically arrive in a module through a bus over multiple clock cycles, fields of interest are often not aligned conveniently within the data word in which they arrive, and/or these fields span multiple data words.
To process the fields, they must be extracted from the data stream, buffered, and realigned for processing.
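To illustrate the realignment, the following plain C++ sketch reassembles a 48-bit source MAC address that spans two 64-bit bus words, using `uint64_t` shifts and masks in place of the `ap_uint` `range()` operations. The field layout matches the MAC extraction discussed in this section: word 0 carries the destination MAC in bits [47:0] and the low 16 bits of the source MAC in bits [63:48]; word 1 carries the upper 32 bits of the source MAC in bits [31:0].

```cpp
#include <cassert>
#include <cstdint>

// Plain C++ sketch of field realignment; the HLS code uses ap_uint<48> and
// range() instead of shifts and masks.
struct Macs { uint64_t dst, src; };

Macs extract_macs(uint64_t w0, uint64_t w1) {
    Macs m;
    m.dst = w0 & 0xFFFFFFFFFFFFULL;                     // bits [47:0] of word 0
    m.src = (w0 >> 48)                                  // source MAC [15:0]
          | ((w1 & 0xFFFFFFFFULL) << 16);               // source MAC [47:16]
    return m;
}
```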
#### Code Example 5 - Source MAC Address Extraction
```cpp
if (!inData.empty()) {
    inData.read(currWord);
    switch (wordCount) {
    case 0:
        MAC_DST              = currWord.data.range(47, 0);  // 48-bit destination MAC
        MAC_SRC.range(15, 0) = currWord.data.range(63, 48); // low 16 bits of source MAC
        break;
    case 1:
        MAC_SRC.range(47, 16) = currWord.data.range(31, 0); // upper 32 bits of source MAC
        break;
    // ... further words of the packet
    }
}
```
**Creating Systems with Multiple Levels of Hierarchy**
The previous sections described a simple three-stage pipeline using Vivado HLS. However, typical packet processing systems might encompass many modules distributed across several layers of hierarchy. Figure 2 shows an example of such a system. The first level of hierarchy consists of two modules, one of which includes three submodules of its own. The top-level module looks like the one described in the section above, Setting Up a Simple System. However, the lower-level module containing the three submodules uses the `INLINE` pragma to dissolve this function and raise its submodules to the top level, as shown in Code Example 6 - Intermediate Module in Vivado HLS.

**Code Example 6 - Intermediate Module in Vivado HLS**
```cpp
void module2(stream<axiWord> &inData, stream<axiWord> &outData) {
#pragma HLS INLINE
........
```
With the inlining of the function, the system resembles Figure 3 after Vivado HLS synthesis. This allows Vivado HLS to create a data flow architecture out of the modules correctly,
pipelining and executing all of them concurrently. Module and signal names are maintained as they were after the inlining of the function.
**Using High-Level Language Constructs**
One major advantage of Vivado HLS is that it allows you to use high-level language constructs to express complex objects, thus raising the level of abstraction considerably over traditional RTL design. One example of this is the description of a small look-up table.
**Code Example 7 - CAM Class Declaration** uses a class object to create a table that stores and retrieves the ARP protocol data. The class has one private member, which is an array of `noOfArpTableEntries` number of entries of `arpTableEntry` type. This type is a `struct`, which consists of the MAC address, the corresponding IP address, and a bit that indicates whether this entry contains valid data or not.
**Code Example 7 - CAM Class Declaration**
```cpp
class cam {
private:
    arpTableEntry filterEntries[noOfArpTableEntries];
public:
    cam();
    bool write(arpTableEntry writeEntry);
    bool clear(ap_uint<32> clearAddress);
    arpTableEntry compare(ap_uint<32> searchAddress);
};
```
The class also includes four methods that operate on this table:
- **The Write Method**
- **The Clear Method**
- **The Compare Method**
- **The Constructor Method** (shown in Code Example 12 - CAM Class Constructor with Pragma Directive to Partition the Array)
**The Write Method**
The write method, illustrated in **Code Example 8 - Write Method for the CAM Class**, takes a new entry as a parameter and stores it in an empty location in the table. Initially, it goes through all the entries in the table and selects the first one that does not contain valid data. This process involves a `for` loop that goes through all array elements.
For the design to reach the target II = 1, the `for` loop must be unrolled completely. Vivado HLS does this automatically if the loop is in a pipelined region (a part of the code to which the pipeline pragma was applied). To unroll the `for` loops throughout an entire function, apply the pipeline pragma to the function.
Alternatively, if the function in which this method is used is pipelined, and the method is inlined into that function during Vivado HLS synthesis, the method essentially inherits the pipeline property applied to the function through the pragma. This is the approach used in the example design accompanying this application note. In other cases, the unrolling behavior is determined by Vivado HLS, depending on various criteria (presence of variable bounds, number of iterations, and so on). You can also instruct Vivado HLS explicitly on how to treat a loop by using the unroll pragma. The method returns TRUE if the entry was stored successfully and FALSE when no empty entry was found.
**Code Example 8 - Write Method for the CAM Class**
```cpp
bool cam::write(arpTableEntry writeEntry) {
    for (uint8_t i = 0; i < noOfArpTableEntries; ++i) {
        if (this->filterEntries[i].valid == 0) {
            this->filterEntries[i] = writeEntry;
            return true;
        }
    }
    return false;
}
```
**The Clear Method**
The clear method, shown in Code Example 9 - Clear Method for the CAM Class, deletes the entry containing the IP address provided as a parameter. The implementation is similar to the write method: a for loop goes through all the entries, compares the IP addresses of the valid ones with the one provided, and erases the first matching entry in the table. Again, TRUE is returned upon success and FALSE if no entry to delete was found.
**Code Example 9 - Clear Method for the CAM Class**
```cpp
bool cam::clear(ap_uint<32> clearAddress) {
    for (uint8_t i = 0; i < noOfArpTableEntries; ++i) {
        if (this->filterEntries[i].valid == 1 && clearAddress == this->filterEntries[i].ipAddress) {
            this->filterEntries[i].valid = 0;
            return true;
        }
    }
    return false;
}
```
**The Compare Method**
The final method is the compare method, shown in Code Example 10 - Compare Method for the CAM Class. It implements the actual look-up functionality. In this case, an IP address is provided, for which the corresponding MAC address has to be returned. This is accomplished by going through all the entries in the table with a for loop and searching for a valid entry with the same IP address. This entry is then returned in its entirety. An invalid entry is returned if nothing is found.
**Code Example 10 - Compare Method for the CAM Class**
```cpp
arpTableEntry cam::compare(ap_uint<32> searchAddress) {
    for (uint8_t i = 0; i < noOfArpTableEntries; ++i) {
        if (this->filterEntries[i].valid == 1 && searchAddress == this->filterEntries[i].ipAddress) {
            return this->filterEntries[i];
        }
    }
    arpTableEntry temp = {0, 0, 0};
    return temp;
}
```
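Taken together, the write, clear, and compare methods behave like a small valid-bit table. The following self-contained C++ model (simplified field types, no HLS pragmas; illustrative only, not the example design's code) mirrors those semantics and shows a typical call sequence:

```cpp
#include <cassert>
#include <cstdint>

// Simplified software model of the CAM semantics: valid bit + linear search.
// Field names mirror the text; the sizes are illustrative.
struct Entry { uint64_t mac; uint32_t ip; bool valid; };

struct Cam {
    static const int N = 8;
    Entry e[N] = {};                          // all entries start invalid

    bool write(Entry w) {                     // store in the first free slot
        for (auto& x : e)
            if (!x.valid) { x = w; x.valid = true; return true; }
        return false;                         // table full
    }
    bool clear(uint32_t ip) {                 // invalidate the first matching entry
        for (auto& x : e)
            if (x.valid && x.ip == ip) { x.valid = false; return true; }
        return false;
    }
    Entry compare(uint32_t ip) const {        // look up; invalid entry if no match
        for (const auto& x : e)
            if (x.valid && x.ip == ip) return x;
        return Entry{0, 0, false};
    }
};
```

A matching compare returns the stored entry; after a clear, the same look-up yields an entry with the valid bit cleared, just as the HLS version returns an all-zero entry.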
This description demonstrates how Vivado HLS can be used to leverage high-level programming constructs and describe packet processing systems in a software-like manner, something that is not practical in plain RTL.
### Ensuring Design Throughput
The previous section introduced the use of a class to describe a self-contained look-up object, which is subsequently synthesized and integrated into modules. While this solution is functionally correct, it does not necessarily ensure that the design reaches the desired throughput target. In a typical protocol processing design, packets arrive over a bus over multiple clock cycles, and up to one new data word per clock cycle might have to be processed. For example, in a 10 Gb/s design, processing packets at line rate requires that the design consume one 64-bit data word every clock cycle at 156.25 MHz. Widening the bus results in a reduced frequency requirement or in increased headroom while processing the data (thus an II = 2 might be possible). For the purposes of this discussion, assume that the target II is always 1. Similar methodologies can be followed to design systems with different II targets.
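The line-rate arithmetic in the paragraph above can be made explicit. The helper below is hypothetical (not part of the example design) and simply evaluates f = rate / bus width for operation at II = 1:

```cpp
#include <cassert>

// Clock frequency (MHz) needed to sustain a given line rate at II = 1:
// one bus word must be consumed per cycle, so f = rate / width.
// (Hypothetical helper illustrating the arithmetic in the text.)
double required_mhz(double gbps, unsigned bus_width_bits) {
    return gbps * 1e3 / bus_width_bits;   // Gb/s -> Mb/s, divided by bits/word
}
```

A 10 Gb/s design with a 64-bit bus needs 156.25 MHz; doubling the bus width to 128 bits halves the required frequency, which is the "widening the bus" trade-off mentioned above.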
To attain the target II goal, it is important to ensure that Vivado HLS can access the required streams and variables in a timely fashion. A straightforward example of a code snippet that violates this principle is shown below in Code Example 11 - Where the II = 1 Constraint Cannot be Met. This example modifies the code from Code Example 5 - Source MAC Address Extraction and attempts to complete the realignment of the MAC source address in one state (state 0).
#### Code Example 11 - Where the II = 1 Constraint Cannot be Met
```cpp
switch (wordCount) {
case 0:
    if (!inData.empty()) {
        inData.read(currWord);
        MAC_DST = currWord.data.range(47, 0);
        MAC_SRC.range(15, 0) = currWord.data.range(63, 48);
        outData.write(currWord);
    }
    // Second read of inData scheduled in the same state: II = 1 cannot be met
    if (!inData.empty()) {
        inData.read(currWord);
        MAC_SRC.range(47, 16) = currWord.data.range(31, 0);
        outData.write(currWord);
    }
    break;
default:
    if (inData.read_nb(currWord))
        outData.write(currWord);
    break;
}
```
Synthesizing this code in Vivado HLS results in an II = 2. The Vivado HLS console output contains the following message:
```
@W [SCHED-68] Unable to enforce a carried dependency constraint (II = 1, distance = 1)
between fifo read on port 'inData_V_data_V'
(hlsProtocolProcessing/sources/iiExample.cpp:8) and fifo read on port
'inData_V_data_V' (hlsProtocolProcessing/sources/iiExample.cpp:3).
@I [SCHED-61] Pipelining result: Target II: 1, Final II: 2, Depth: 3.
```
This message informs you that a carried dependency between the two reads inhibits the synthesis process from scheduling the accesses to achieve an II = 1. The issue is that, in the code lines indicated, the module attempts to read the stream inData twice in the same clock cycle. Because this is impossible for a stream (which represents a single port of a memory/FIFO construct), synthesis fails to meet the set constraints. Similar limitations apply when accessing static variables used to maintain information between states. You must therefore use caution when scheduling accesses in states to reach the desired throughput goal.
More complex access issues can arise in complex designs, such as the look-up table, which was introduced in the previous section. If you synthesize the code provided there as-is, Vivado HLS cannot meet a target II = 1. This is because of access congestion due to the limited number of memory ports. By default, Vivado HLS uses a block RAM to store the table entries, and a block RAM contains two access ports. To scan all the entries of an eight-entry table, the module requires at least four clock cycles, which means it cannot attain the target II = 1. To address this issue, you must instruct Vivado HLS to partition the array in which the table entries are stored. Partitioning the array essentially breaks it down into multiple, smaller arrays and allows the use of more memories and, therefore, more access ports. In the most extreme case, which is the one used in Code Example 12 - CAM Class Constructor with Pragma Directive to Partition the Array, you can partition an array completely, which essentially creates a register array in which the individual elements are stored.
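The memory-port argument above reduces to a ceiling division: the number of cycles needed to scan the table is the entry count divided by the number of ports, rounded up. A tiny illustrative helper (not from the design):

```cpp
#include <cassert>

// Cycles needed to scan a table of 'entries' elements through a memory with
// 'ports' access ports: ceiling division. (Illustrative helper only.)
unsigned scan_cycles(unsigned entries, unsigned ports) {
    return (entries + ports - 1) / ports;
}
```

For the eight-entry table: a dual-port block RAM needs four cycles per scan, so II = 1 is impossible, while a fully partitioned (register) array exposes every entry at once and scans in a single cycle.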
**Code Example 12 - CAM Class Constructor with Pragma Directive to Partition the Array**
```cpp
cam::cam() {
#pragma HLS array_partition variable=filterEntries complete
    for (uint8_t i = 0; i < noOfArpTableEntries; ++i)
        this->filterEntries[i].valid = 0;
}
```
This code snippet shows the constructor for the table class described in the previous section. The constructor sets all entries to invalid and also includes the pragma that partitions the array. Alternatively, the pragma could be specified in the function in which the object of the class was instantiated. In that case, the pragma would apply only to the specific object; used as shown in the example, however, it applies to all objects of this class.
### Example Design File Location and Details
You can download the design files for this application note from the following location:
The following table provides details about the example design.
**Table 1: Example Design Details**
<table>
<thead>
<tr>
<th>Parameter</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Developer Name</td>
<td>Xilinx</td>
</tr>
<tr>
<td>Target Devices</td>
<td>HLS Design: All Xilinx devices<br/>Vivado Design Suite Design: Virtex®-7 XC7VX690T-2FFG1761C</td>
</tr>
<tr>
<td>Source code provided?</td>
<td>Y</td>
</tr>
<tr>
<td>Source code format (if provided)</td>
<td>C++, Verilog, VHDL</td>
</tr>
<tr>
<td>Design uses code or IP from existing reference design, application note, 3rd party or Vivado Design Suite software</td>
<td>Ten Gigabit Ethernet PCS/PMA (10GBASE-R/KR) v4.1<br/>Ten Gigabit Ethernet MAC v13.0<br/>FIFO Generator v11.0<br/>AXI4-Stream Register Slice v1.1</td>
</tr>
</tbody>
</table>
### Example Design Description
To better describe the concepts introduced in this application note, a simple system implementing basic ping and ARP functionality is provided. In a typical network processing subsystem, this system would:
- Reside between the Ethernet MAC and a user application.
- Respond to ping requests as well as provide support for MAC address resolution, while looping back all other packets. (For use in a real system, the loopback module would have to be replaced with your application.)
Figure 4 shows the structure of the example design system. It consists of a parser module that identifies the packet type for incoming packets and forwards the packets to one of the following modules:
- ARP Server: The ARP server responds to ARP requests directed at this system and handles MAC address resolution requests instigated by your application.
- Internet Control Message Protocol (ICMP) Server: The ICMP server replies to ping requests sent to this system.
- Loopback: The loopback sends packets back through the MAC and into the network without change.
At the system output, the merge module recombines traffic streams from the three other modules and produces one output stream to send to the network.
### Table 1: Example Design Details (Cont’d)
<table>
<thead>
<tr>
<th>Parameter</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Simulation</strong></td>
<td></td>
</tr>
<tr>
<td>Functional simulation performed</td>
<td>Y</td>
</tr>
<tr>
<td>Timing simulation performed?</td>
<td>Y</td>
</tr>
<tr>
<td>Testbench provided for functional and timing simulation?</td>
<td>Y</td>
</tr>
<tr>
<td>Testbench format</td>
<td>C++</td>
</tr>
<tr>
<td>Simulator software and version</td>
<td>Any supported by HLS (the Vivado simulator is the default)</td>
</tr>
<tr>
<td>SPICE/IBIS simulations</td>
<td>N</td>
</tr>
<tr>
<td>Implementation software tool(s) and version</td>
<td>Vivado HLS 2014.1 and later, Vivado 2013.4 and later</td>
</tr>
<tr>
<td><strong>Hardware Verification</strong></td>
<td></td>
</tr>
<tr>
<td>Hardware verified?</td>
<td>Y</td>
</tr>
<tr>
<td>Platform used for verification</td>
<td>Xilinx VC709 Development Board</td>
</tr>
</tbody>
</table>
The description of the system follows the same principles introduced in the first part of this application note. The top-level function `VHLSExample` consists of five sub-functions, each corresponding to one of the submodules shown in Figure 4. All modules are connected using streams. The external port streams are configured to use AXI4-Stream interfaces, while the internal streams use the ap_fifo interface, which is the default for Vivado HLS streams. The typical set of pragmas is applied. In this case, the depth property of the `STREAM` pragma is set to 1, but larger values might be necessary to address transient effects in the system.
Increasing the value for the depth of a stream naturally increases resource usage as well. This increase happens step-wise because the basic building blocks used for a stream (either LUTs and flip-flops or block RAMs) can accommodate a specific number of entries before having to add more resources to store additional entries. To illustrate this, Table 2 shows the resources used for various stream depth values. For values 1 and 4, the amount of resources used is identical because no additional resources had to be used to fit the extra entries; whereas, when the number of entries is increased to 8, resource consumption also increases commensurately.
**Table 2: Resource Use for Different Stream Depth Values**
<table>
<thead>
<tr>
<th>Stream Depth</th>
<th>LUTs</th>
<th>Flip-Flops</th>
<th>Block RAMs</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>2564</td>
<td>210</td>
<td>0</td>
</tr>
<tr>
<td>4</td>
<td>2564</td>
<td>210</td>
<td>0</td>
</tr>
<tr>
<td>8</td>
<td>2456</td>
<td>1592</td>
<td>40</td>
</tr>
</tbody>
</table>
Moving down one hierarchy level, the Parser and the ICMP server consist of multiple submodules joined together in a pipelined fashion. Figure 5 illustrates this for the Parser. There are three submodules:
- Ethernet Detection: Checks the EtherType field in the Ethernet frame header and determines the lower layer protocol. It then forwards the packet either to the ARP server or to the Length Adjust module.
- Length Adjust: Readjusts the packet by stripping away any padding added by the Ethernet layer to meet minimum packet size requirements, so that the actual packet length matches the packet length given in the IP header.
- ICMP Detection: Uses the protocol field in the IPv4 header to detect ICMP packets and forward them to the ICMP server or the Loopback module.

*Figure 5: Parser Block Diagram*
All the submodules are state machines that adhere to the FSM description methodology (introduced in the section Implementing a State Machine with Vivado HLS, page 5).
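As an illustration of the Length Adjust behavior described above, the following byte-level C++ sketch is hypothetical (the actual module operates on streaming 64-bit words): it truncates a padded IP packet to the total length stated in its own header, discarding the Ethernet padding.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical byte-level sketch of Length Adjust: Ethernet pads short
// frames, so the IP packet is truncated to the total length from its header.
std::vector<uint8_t> length_adjust(const std::vector<uint8_t>& ip_packet,
                                   uint16_t ip_total_length) {
    if (ip_packet.size() <= ip_total_length)
        return ip_packet;                      // no padding present
    return std::vector<uint8_t>(ip_packet.begin(),
                                ip_packet.begin() + ip_total_length);
}
```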
The ICMP server receives ICMP packets from the parser, processes them, and creates ping replies for valid packets. Its structure is shown in *Figure 6*. It consists of three stages arranged in a pipeline manner with an additional signal that jumps over the dropper and forwards the IP checksum directly to the IP checksum module. The three modules are:
- Create Reply: Parses the ICMP header, determines if the packet is a valid ICMP packet, creates a reply, and calculates the IP checksum for the newly created reply packet. This checksum is then forwarded to the Insert Checksum module. The packet status (valid or invalid) is signaled to the Dropper over the validBuffer.
- Dropper: Allows valid packets to stream through, while filtering invalid packets and removing them from the packet stream.
- Insert Checksum: Receives the checksum for the newly created ICMP reply packet over the cr2checksum stream and reinserts it into valid packets, which it receives from the Dropper.
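The checksum that Create Reply computes and Insert Checksum reinserts is the standard Internet checksum. As a reference, here is a minimal software implementation of the RFC 1071 algorithm (not taken from the example design, which computes it incrementally over the streaming words):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Reference Internet checksum (RFC 1071) over 16-bit words: ones' complement
// sum with the carries folded back in, result inverted.
uint16_t internet_checksum(const std::vector<uint16_t>& words) {
    uint32_t sum = 0;
    for (uint16_t w : words) {
        sum += w;
        sum = (sum & 0xFFFF) + (sum >> 16);   // fold the carry back in
    }
    return static_cast<uint16_t>(~sum);
}
```

A useful property for verification: summing a header together with its own checksum field yields 0xFFFF, so the checksum of the extended word list is 0.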

*Figure 6: ICMP Server Block Diagram*
Both the Parser and the ICMP server are inlined into the top-level function and are dissolved during Vivado HLS synthesis. This allows Vivado HLS to better optimize the scheduling of the design at the top level, resulting in a final system architecture that resembles *Figure 7*. The dashed lines demarcate the location of the Parser and ICMP modules in the source code. In the synthesized system, the mid-level modules have been removed and their submodules brought to the top level. Stream names have been omitted for the sake of clarity.
*Figure 7: System Resulting from Vivado HLS Synthesis*
Table 3 and Table 4 contain excerpts from an expanded Vivado HLS synthesis report showing the submodules of the VHLSExample module. The complete Vivado HLS report is explained in more detail in the subsequent sections.
**Table 3: Post Vivado Synthesis Sub-module List**
<table>
<thead>
<tr>
<th>Instance</th>
<th>Module</th>
<th>Latency</th>
<th>Interval</th>
<th>Type</th>
</tr>
</thead>
<tbody>
<tr>
<td>grp_detect_mac_protocol_fu_799</td>
<td>detect_mac_protocol</td>
<td>1</td>
<td>1</td>
<td>Function</td>
</tr>
<tr>
<td>grp_cut_length_fu_839</td>
<td>cut_length</td>
<td>1</td>
<td>1</td>
<td>Function</td>
</tr>
<tr>
<td>grp_detect_ip_protocol_fu_731</td>
<td>detect_ip_protocol</td>
<td>1</td>
<td>1</td>
<td>Function</td>
</tr>
<tr>
<td>grp_arp_server_fu_547</td>
<td>arp_server</td>
<td>2</td>
<td>2</td>
<td>Function</td>
</tr>
<tr>
<td>grp_createReply_fu_689</td>
<td>createReply</td>
<td>1</td>
<td>1</td>
<td>Function</td>
</tr>
<tr>
<td>grp_dropper_fu_863</td>
<td>dropper</td>
<td>1</td>
<td>1</td>
<td>Function</td>
</tr>
<tr>
<td>grp_insertChecksum_fu_775</td>
<td>insertChecksum</td>
<td>1</td>
<td>1</td>
<td>Function</td>
</tr>
<tr>
<td>grp_loopback_fu_887</td>
<td>loopback</td>
<td>1</td>
<td>1</td>
<td>Function</td>
</tr>
<tr>
<td>grp_merge_fu_649</td>
<td>merge</td>
<td>1</td>
<td>1</td>
<td>Function</td>
</tr>
</tbody>
</table>
The final module of interest in the design is the ARP server, which implements two functions:
- Receives ARP requests over the network, determines whether or not these requests are destined for this node, and if so, sends a reply to the requesting node containing the MAC address of this node. The MAC address is hard coded in the source code.
- Receives external requests to resolve the MAC of an IP address. This is received over the queryIP stream shown in Figure 8. The ARP server then looks up the IP address in an internal table, which it maintains. If a match is found, the ARP server reads the entry from the table and returns the value for the MAC address over the returnMAC stream. If no match is found, the module sends an ARP Request for this IP address to the network broadcast address and waits for a reply. If nothing is received, the operation times out and the module returns to its idle state. If a reply is received, the MAC address corresponding to the requested IP address is stored in the internal table for future use and then sent back over the returnMAC stream.
The queryIP and returnMAC streams use the normal ap_fifo interface, which is the native interface of a Vivado HLS stream. It resembles a typical FIFO interface, with read, write, full, and empty signals.
**Table 4: Post Vivado Synthesis Sub-Module Resource Utilization List**
<table>
<thead>
<tr>
<th>Instance</th>
<th>Module</th>
<th>BRAM_18K</th>
<th>DSP48E</th>
<th>Flip-Flops</th>
<th>LUTs</th>
</tr>
</thead>
<tbody>
<tr>
<td>arp_server_U0</td>
<td>arp_server</td>
<td>0</td>
<td>0</td>
<td>1355</td>
<td>1718</td>
</tr>
<tr>
<td>createReply_U0</td>
<td>createReply</td>
<td>0</td>
<td>0</td>
<td>553</td>
<td>555</td>
</tr>
<tr>
<td>dropper_U0</td>
<td>dropper</td>
<td>0</td>
<td>0</td>
<td>213</td>
<td>12</td>
</tr>
<tr>
<td>ethernetDetection_U0</td>
<td>ethernetDetection</td>
<td>0</td>
<td>0</td>
<td>433</td>
<td>95</td>
</tr>
<tr>
<td>icmpDetection_U0</td>
<td>icmpDetection</td>
<td>0</td>
<td>0</td>
<td>225</td>
<td>841</td>
</tr>
<tr>
<td>insertChecksum_U0</td>
<td>insertChecksum</td>
<td>0</td>
<td>0</td>
<td>369</td>
<td>120</td>
</tr>
<tr>
<td>lengthAdjust_U0</td>
<td>lengthAdjust</td>
<td>0</td>
<td>0</td>
<td>236</td>
<td>67</td>
</tr>
<tr>
<td>loopback_U0</td>
<td>loopback</td>
<td>0</td>
<td>0</td>
<td>205</td>
<td>4</td>
</tr>
<tr>
<td>merge_U0</td>
<td>merge</td>
<td>0</td>
<td>0</td>
<td>414</td>
<td>673</td>
</tr>
<tr>
<td><strong>Total</strong></td>
<td></td>
<td>9</td>
<td>0</td>
<td>4003</td>
<td>4085</td>
</tr>
</tbody>
</table>
**Figure 8: ARP Server Block Diagram**
### Example Design Contents and Descriptions
The example design accompanying this application note consists of:
- A Vivado HLS project that contains all the modules described previously. This is described in the section Protocol Processing Example: Vivado HLS Project, below.
- A Vivado Design Suite project that integrates the Vivado HLS modules with the necessary companion modules (for example, an Ethernet MAC core) to produce a functional example design targeting the VC709 evaluation board. This is described in the sections Using Vivado Design Suite to Implement the Design, page 21, and Testing the Example Design on the VC709 Evaluation Board, page 22.
### Protocol Processing Example: Vivado HLS Project
The example design provided contains six C++ source code files, one for each module and one accompanying header file for each. An additional header file (globals.hpp) contains declarations pertinent to the entire project. Finally, a test bench file (VHLSExample_tb.cpp) is provided to facilitate simulation and verification of the design from within the Vivado HLS environment. All the files are located in the sources subfolder in the project directory.
The project is pre-configured to be built for a clock period of 6.66 ns and targets an XC7VX690T device by default. You can change these settings through the solution and project settings.
Building the project generates a detailed report, an example of which is shown in Figure 9. This report contains critical information about the generated design.
**Important:** Examine the report to determine whether or not Vivado HLS can produce a design that meets all the set constraints.
Performance Reporting
The first part of the report provides performance estimates. This includes information on the throughput and frequency of the design. In this case, the design meets timing (achieved clock period 6.38 ns).
**Important:** Keep in mind that the timing value is a Vivado HLS estimate and might vary from the post-synthesis or post-place-and-route value. This is because Vivado HLS estimates timing using fixed timing values for different operation types and devices. This results in two sources of discrepancy:
- Vivado HLS does not perform full logic synthesis and thus cannot take advantage of the logic simplifications that result from it.
- Vivado HLS does not take into account detailed placement and routing information, which is not yet available at that stage.
A more precise estimate can be obtained by evaluating the design through the export menu (click the Solution menu, choose Export RTL, and in the pop-up window check the Evaluate check box; click OK to run the evaluation). This runs logic synthesis, placement, and routing on the design. The resulting timing might still vary from what is achieved in the final design, in which placement and routing change to accommodate any additional logic in the non-HLS portion of your design, but this estimate is inherently more precise than the one generated by Vivado HLS synthesis. Typically, state machine-based designs exhibit better timing after place and route than the Vivado HLS synthesis reports suggest, so using this additional step occasionally to glimpse the post-synthesis design performance is recommended.
Other performance information available in the report includes the final latency and II value for the design. The example design used in this application note has a total latency of 13 clock cycles and an II = 1, as requested. If any of the constraints is not met, you can find more information about the issue in the console window. You can analyze latency further using the Analysis view, which provides a detailed overview of the scheduling of the module and can be used to account for each cycle of latency reported by Vivado HLS synthesis. Clicking each module name opens a detailed view specific to that module.
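As a quick sanity check of what these numbers imply (a back-of-the-envelope sketch, not part of the report), the latency and II figures from the report translate into throughput as follows:

```python
# Quick sanity check of the throughput implied by the report's numbers.
# For a pipelined design, processing n inputs takes latency + (n - 1) * II
# cycles; with II = 1 the core accepts one new input every clock cycle.

CLOCK_NS = 6.66   # target clock period from the report (ns)
LATENCY = 13      # total latency in clock cycles
II = 1            # initiation interval

def cycles(n_inputs: int, latency: int = LATENCY, ii: int = II) -> int:
    """Clock cycles needed to process n_inputs through the pipeline."""
    return latency + (n_inputs - 1) * ii

freq_mhz = 1000.0 / CLOCK_NS          # ~150 MHz at the target period
print(f"fmax ~= {freq_mhz:.1f} MHz")
print(f"1000 inputs -> {cycles(1000)} cycles "
      f"({cycles(1000) * CLOCK_NS / 1000:.2f} us)")
```

With II = 1 the latency is a one-time pipeline fill cost: 1000 inputs need 1012 cycles, so sustained throughput approaches one input per clock.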
Going back to the main report, you can click **Instance** under Detail to obtain a breakdown of the latency and II information per module. Clicking on each module name opens a separate report for that particular module. These reports match the main report in format and content type. You can use this part of the report to identify which module in the design fails the II target, or to determine the latency incurred by each module.
**Example of Auto-generated Synthesis Report**
As mentioned earlier, the Vivado Design Suite automatically generates a report after you build and synthesize your project. An example report is shown in the figure below.
**Synthesis Report for 'hlsExample'**
### General Information
- **Date:** Thu Jan 21 12:30:25 2014
- **Version:** 2013.4 (build date: Mon Dec 09 17:07:59 PM 2013)
- **Project:** appNote_new
- **Solution:** solution1
- **Product family:** virtex7 virtex7_fpi6
- **Target device:** xc7vx690tffg1761-2
### Performance Estimates
#### Timing (ns)
**Summary**

| Clock   | Target | Estimated | Uncertainty |
|---------|--------|-----------|-------------|
| default | 6.66   | 6.38      | 0.83        |
#### Latency (clock cycles)
**Summary**

| Latency (min) | Latency (max) | Interval (min) | Interval (max) | Type     |
|---------------|---------------|----------------|----------------|----------|
| 13            | 13            | 1              | 1              | dataflow |
### Utilization Estimates
#### Summary
| Name                | BRAM_18K | DSP48E | FF     | LUT    |
|---------------------|----------|--------|--------|--------|
| Expression          | -        | -      | -      | -      |
| FIFO                | 8        | -      | 486    | 2542   |
| Instance            | -        | -      | 4303   | 3827   |
| Memory              | -        | -      | -      | -      |
| Multiplexer         | -        | -      | -      | -      |
| Register            | -        | -      | 18     | -      |
| ShiftMemory         | -        | -      | -      | -      |
| **Total**           | 8        | 0      | 4807   | 6399   |
| **Available**       | 2940     | 3600   | 866400 | 433200 |
| **Utilization (%)** | ~0       | 0      | ~0     | 1      |
#### Detail
- **Instance**
- **Memory**
**Figure 9:** Example Synthesis Report for the Example Design
Resource Utilization Reporting
The second part of the report contains resource utilization estimates. These estimates might (and usually do) differ from the final values that result after running logic synthesis on the design for reasons similar to those of the performance estimation. Furthermore, the breakdown in the utilization might vary because logic synthesis might decide to use LUTs to implement something for which Vivado HLS estimated that a DSP48 slice will be used, or logic synthesis might use distributed RAM instead of block RAM for the implementation of a queue. Expanding the Instance menu lists the resource use for each submodule, as it does in the Performance section. Expanding the FIFO menu provides detailed information on all internal streams in the design and the resources they use.
The final part of the report (not shown here) lists the module interfaces. In this case, there are two AXI4-Stream interfaces, inData and outData, two ap_fifo interfaces, queryIP and returnMAC, along with a host of control signals used to drive the generated Vivado HLS core.
To facilitate C and C/RTL verification, the example design includes a test scenario that exercises all four paths present in the design. In this scenario, packets are read from the input file (called in.dat) and are injected into the system's input queue. These packets include ARP requests, ICMP requests, and Transmission Control Protocol (TCP) packets. The ARP and ICMP requests are answered by the system, while the TCP packet is not recognized and looped back. At the end of the test, an IP address is written into the queryIP queue and the ARP server produces and sends an ARP request corresponding to that address. The output from the simulation is compared with a golden output file called gold.dat. Thus, running the C simulation involves (after successfully building the project, of course) navigating to the /project_dir/solution1/csim/build folder and typing the following commands (when using any Linux system):
```bash
./vhlsExample
/project_dir/sources/csim/in.dat
/project_dir/sources/csim/queryReply.dat
/project_dir/sources/csim/gold.dat
/project_dir/sources/csim/out.dat
```
The next step following successful C verification is to use the C/RTL co-simulation to ensure correct functionality of the generated RTL. Vivado HLS allows seamless transition from the C to the C/RTL co-simulation. The tool automatically generates an appropriate RTL test bench from the provided C one, executes the RTL simulation, and then compares the output to the golden one, just like in the C simulation.
The final step before integrating the example design in the Vivado Design Suite for logic synthesis is exporting the core. To do this, select Export RTL on the Solution menu. From the various format options select IP Catalog. Click OK and Vivado HLS generates the IP core in the standard IP-XACT format. The generated files are located in the /project_dir/solution1/impl/ip folder.
To facilitate execution of all the design steps described above, the example design includes a Tcl script that you can use to perform all the above steps in sequence. The script is called run_hls.tcl. To execute the script, type vivado_hls -f run_hls.tcl in the console.
Using Vivado Design Suite to Implement the Design
After generating the core in Vivado HLS, the core needs to be imported into the provided Vivado Design Suite project. This project was generated using Vivado Design Suite 2014.1, although it should be possible to use newer versions. It targets the Xilinx VC709 development board [Ref 3].
Opening the Vivado Design Suite project accompanying the Vivado HLS project brings up a screen containing the design sources for the project. These include the network interface and its accompanying clock generation signals, the Vivado HLS module, and two modules that facilitate testing of the design.
The Vivado HLS-generated IP core appears in the sources pane marked with a red exclamation mark, which denotes that the IP core cannot be found in any of the IP repositories currently in use with this Vivado Design Suite installation. This can be resolved by adding the IP core generated by Vivado HLS in the previous steps to an existing or new repository. It is not, however, necessary for synthesizing and implementing the design because the design files for the core are already included in the example design project.
The two additional test bench modules used are the debouncer, which eliminates voltage level oscillations from pressing the push buttons on the board, and the *queryGenerator*, which produces an IP address, the MAC address of which is being requested from the ARP server, and reads the reply from the Vivado HLS module. See Figure 10.
The example design is readily implemented using Vivado tools. Generating a bitstream and downloading it to a VC709 evaluation board immediately starts the system.
**Testing the Example Design on the VC709 Evaluation Board**
To test the design on the VC709 evaluation board, you must connect the board to a PC, which serves as the counterpart and produces the test stimuli. This can be done either directly if the PC has an SFP port, or over a switch which contains both SFP and standard Ethernet ports. You can use an open-source program, such as Wireshark, to monitor traffic on any network interface. This allows you to monitor the packets that are exchanged between the PC and the VC709 evaluation board to verify that the correct information exchange takes place.
The simplest scenario that can be used to test the system is to send it a simple ping request by typing:
```bash
ping 1.1.1.1
```
at a Linux or Windows terminal. This initially sends an ARP request to determine the MAC address belonging to this IP address. The Vivado HLS module responds with an ARP reply containing the MAC address corresponding to the VC709 evaluation board. The PC then sends a ping request to the VC709 evaluation board, which produces a ping reply to each ping request until you interrupt the process on the PC side. Note that only the first ping request triggers an ARP request. In all subsequent ping requests, the MAC address for the VC709 evaluation board is found in the PC's ARP table, and thus no additional ARP requests are triggered. Testing isolated ARP requests is possible by using the `arping` command in Linux:
```bash
arping -I ethInterfaceName 1.1.1.1
```
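For readers unfamiliar with what travels over the wire in this exchange, the following is an illustrative Python sketch (not part of the application note) that builds the ARP request frame a PC would broadcast for 1.1.1.1, following the RFC 826 field layout; the source MAC and source IP here are made up:

```python
# Hedged sketch: construct the Ethernet frame carrying an ARP request
# ("who has 1.1.1.1?"), using only the Python standard library.
# Field layout follows RFC 826; the addresses are illustrative only.
import struct
import socket

def arp_request(src_mac: bytes, src_ip: str, target_ip: str) -> bytes:
    """Build an Ethernet frame carrying an ARP request for target_ip."""
    eth = struct.pack("!6s6sH",
                      b"\xff" * 6,          # broadcast destination MAC
                      src_mac,
                      0x0806)               # EtherType: ARP
    arp = struct.pack("!HHBBH6s4s6s4s",
                      1,                    # HTYPE: Ethernet
                      0x0800,               # PTYPE: IPv4
                      6, 4,                 # HLEN, PLEN
                      1,                    # OPER: 1 = request
                      src_mac, socket.inet_aton(src_ip),
                      b"\x00" * 6,          # target MAC: unknown, zeroed
                      socket.inet_aton(target_ip))
    return eth + arp

frame = arp_request(b"\x02\x00\x00\x00\x00\x01", "1.1.1.10", "1.1.1.1")
print(len(frame), frame[12:14].hex())   # 42-byte frame, EtherType 0806
```

The ARP server in the example design parses exactly these fields from the incoming frame and swaps the sender/target pairs to produce its reply.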
Testing the ARP request functionality is done by using the south push button on the right side of the VC709 evaluation board. Every time the button is pressed, a query for the IP address 1.1.1.2 is generated. If no additional configuration is performed on the PC, these requests time out and fail because the PC's ARP table does not contain a MAC address corresponding to this IP address. The `arp` command has to be used to add this IP address to the PC's ARP table:
```bash
arp -s 1.1.1.2 AB:90:78:56:34:12
```
Alternatively, the PC IP address on that interface can be set to 1.1.1.10. All other intermittent traffic sent from the PC to the VC709 evaluation board is looped back to the PC without changes.
---
**Conclusions**
Vivado HLS allows quick and easy implementation of protocol processing designs on FPGAs using C/C++ and leveraging the productivity increases offered by higher level languages as opposed to traditional RTL. You can take advantage of additional features offered by Vivado HLS to target the desired architecture and to quickly explore design trade-offs without rewriting the source code. Such features include:
- Straightforward system assembly using C functions
- Data exchange over streams (which offer standardized FIFO-like interfaces with built-in flow control)
- Vivado HLS pragmas
As a vehicle for explaining the basic concepts of such designs, this application note uses a simple ARP/ICMP server that replies to ping and ARP requests and resolves IP address queries. You can synthesize the example design with Vivado HLS and integrate the design into the accompanying infrastructure, which allows it to be tested using a Xilinx VC709 evaluation board. This demonstrates that Vivado HLS designed modules can perform protocol processing at line rates of 10 Gb/s and higher.
---
**References**
The following references are used in this application note:
1. Software Defined Specification Environment for Networking (SDNet)
---
**Revision History**
The following table shows the revision history for this document.
<table>
<thead>
<tr>
<th>Date</th>
<th>Version</th>
<th>Description of Revisions</th>
</tr>
</thead>
<tbody>
<tr>
<td>08/08/2014</td>
<td>1.0.1</td>
<td>Corrected design files link.</td>
</tr>
<tr>
<td>05/30/2014</td>
<td>1.0</td>
<td>Initial Xilinx release.</td>
</tr>
</tbody>
</table>
Notice of Disclaimer
The information disclosed to you hereunder (the “Materials”) is provided solely for the selection and use of Xilinx products. To the maximum extent permitted by applicable law: (1) Materials are made available "AS IS" and with all faults, Xilinx hereby DISCLAIMS ALL WARRANTIES AND CONDITIONS, EXPRESS, IMPLIED, OR STATUTORY, INCLUDING BUT NOT LIMITED TO WARRANTIES OF MERCHANTABILITY, NON-INFRINGEMENT, OR FITNESS FOR ANY PARTICULAR PURPOSE; and (2) Xilinx shall not be liable (whether in contract or tort, including negligence, or under any other theory of liability) for any loss or damage of any kind or nature related to, arising under, or in connection with, the Materials (including your use of the Materials), including for any direct, indirect, special, incidental, or consequential loss or damage (including loss of data, profits, goodwill, or any type of loss or damage suffered as a result of any action brought by a third party) even if such damage or loss was reasonably foreseeable or Xilinx had been advised of the possibility of the same. Xilinx assumes no obligation to correct any errors contained in the Materials or to notify you of updates to the Materials or to product specifications. You may not reproduce, modify, distribute, or publicly display the Materials without prior written consent. Certain products are subject to the terms and conditions of the Limited Warranties which can be viewed at http://www.xilinx.com/warranty.htm; IP cores may be subject to warranty and support terms contained in a license issued to you by Xilinx. Xilinx products are not designed or intended to be fail-safe or for use in any application requiring fail-safe performance; you assume sole risk and liability for use of Xilinx products in Critical Applications: http://www.xilinx.com/warranty.htm#critapps.
---
ABSTRACT
Checking the compliance of a business process execution with respect to a set of regulations is an important issue in several settings. A common way of representing the expected behavior of a process is to describe it as a set of business constraints. Runtime verification and monitoring facilities allow us to continuously determine the state of constraints on the current process execution, and to promptly detect violations at runtime. A plethora of studies has demonstrated that in several settings business constraints can be formalized in terms of temporal logic rules. However, in virtually all existing works the process behavior is mainly modeled in terms of control-flow rules, neglecting the equally important data perspective. In this paper, we overcome this limitation by presenting a novel monitoring approach that tracks streams of process events (that possibly carry data) and verifies if the process execution is compliant with a set of data-aware business constraints, namely constraints not only referring to the temporal evolution of events, but also to the temporal evolution of data. The framework is based on the formal specification of business constraints in terms of first-order linear temporal logic rules. Operationally, these rules are translated into finite state automata for dynamically reasoning on partial, evolving execution traces. We show the versatility of our approach by formalizing (the data-aware extension of) Declare, a declarative, constraint-based process modeling language, and by demonstrating its application on a concrete case dealing with web security.
Categories and Subject Descriptors
F.4.1 [Mathematical Logic and Formal Languages]: Mathematical logic—Temporal logic; H.4.1 [Information System Applications]: Office Automation—Workflow Management; D.2.3 [Software Engineering]: Software/Program Verification—Formal Methods; E.0 [Data]: General
General Terms
Design, Languages, Management, Verification
Keywords
Compliance Monitoring, Runtime Verification, First-order Linear Temporal Logic, Operational Decision Support
### 1. INTRODUCTION
Checking the compliance of a business process execution with respect to a set of (dynamic) regulations is an important issue in several settings. A compliance model is, in general, constituted by a set of business constraints that can be used to monitor at runtime whether a (running) process instance behaves as expected or not. The declarative nature of business constraints makes it difficult to use procedural languages to describe compliance models [11]. First, the integration of diverse and heterogeneous constraints would quickly make models extremely complex and difficult to maintain. Second, business constraints often target uncontrollable aspects, such as activities carried out by internal autonomous actors or even by external independent entities.
These characteristics make constraint-based approaches suitable for capturing loosely structured and knowledge-intensive processes, such as the treatment of a patient in a hospital. In this case, there is no strict process that physicians have to follow, but only some (underspecified) guidelines and clinical pathways. Even though it is not possible to guarantee that clinical experts behave as specified by the guidelines, it is still crucial to timely detect whether their behavior is aligned with the expected one and, if not, to promptly detect and report deviations. This calls for runtime verification and monitoring as a flexible form of execution support. Since the main focus here is on the dynamics of an evolving system as time flows, LTL (Linear Temporal Logic) has been extensively proposed as a suitable framework for formalizing the properties to be monitored, providing at the same time effective verification procedures. A distinctive feature in the application of LTL to business constraint modeling and monitoring is that process executions do not continue indefinitely, but eventually reach a termination point. This, in turn, requires shifting from standard LTL over infinite traces (and corresponding techniques based on Büchi automata) to LTL over finite traces (and corresponding techniques based on finite state automata).
In this paper, we represent business constraints using Declare [23, 31]. Declare is a declarative language that combines a formal semantics grounded in LTL with a graphical representation for users. Differently from procedural models, which explicitly enumerate the allowed execution traces (considering all the others to be implicitly forbidden), a Declare model describes a process in terms of a set of constraints that must be satisfied during the process execution. Declare follows a sort of “open world assumption” where every activity can be freely performed unless it is forbidden. One key limitation of Declare is that it completely neglects the data perspective [29], which is crucial in several settings (e.g., healthcare, web security, as well as financial institutions that need solid auditing techniques to prevent accounting scandals and frauds). In this light, a compliance model should also include data-aware constraints to guarantee the correct execution of a process in terms of control and data flow.

Since the standard LTL-based formalization of Declare is not sufficient to represent data-aware constraints, it consequently becomes necessary to reformulate them by relying on a more expressive formalism. A first attempt in this direction has been proposed in [29,30], where the Event Calculus (EC) is used to formalize data-aware Declare constraints. The limitation of this approach, however, is that it is unable to detect violations at the earliest possible time, as well as violations that cannot be ascribed to an individual constraint but are determined by the interplay of two or more constraints (cf. [23,27]). To address this issue, the approach presented in this paper aims at reconciling the data-aware nature of [29,30] and the advanced reasoning capabilities of [23,27]. On the one hand, this is done by adopting a finite-trace variant of the first-order extension of LTL, which we call FOLTL, to formalize data-aware Declare constraints; on the other hand, a characterization of this logic in terms of finite state automata is exploited to carry out the actual monitoring task. Employing finite state automata for monitoring FOLTL rules allows us to lift the technique used in [23] for the early detection of violations to the data-aware case.
The standard FOLTL semantics produces as output a boolean value representing whether the current (finite) trace complies with the monitored constraint or not. In the context of monitoring, however, it is typically required to provide a more fine-grained analysis, which distinguishes between a temporary vs a permanent satisfaction/violation of the constraint, by reasoning on the finite prefixes of the evolving trace. This reflects that, when checking evolving traces, it is not always possible to produce at runtime a definitive answer about compliance. To tackle this issue, we further extend FOLTL for finite traces with the four-valued semantics of RV-FOLTL (FOLTL for Runtime Verification). This makes the approach able to provide advanced diagnostics to the end users, reporting which constraints are violated and why.
More specifically, our approach does not only communicate if a running trace is currently compliant with the constraint model, but also computes the state of each constraint, which is one among temporarily satisfied, permanently satisfied, temporarily violated and permanently violated. The first state attests that the monitored process instance is currently compliant with the constraint, but it can violate the constraint in the future. The second state indicates that the constraint is satisfied permanently, i.e., it is no longer possible to violate the constraint. The third state models that the constraint is currently violated, but it is possible to bring it back to a satisfied state by continuing the trace. The last state models the situation where the violation cannot be repaired anymore.
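To make the four states concrete, here is a hedged Python sketch of a runtime monitor for the single template response(A,B), i.e., $\square(A \rightarrow \diamond B)$. The state names follow the paper; the monitor logic is hand-built for this one template rather than derived from an automaton construction:

```python
# Hedged sketch of four-valued runtime monitoring for response(A,B).
# The monitor tracks whether an occurrence of A is still awaiting a B.
TEMP_SAT, PERM_SAT, TEMP_VIOL, PERM_VIOL = (
    "temporarily satisfied", "permanently satisfied",
    "temporarily violated", "permanently violated")

class ResponseMonitor:
    def __init__(self, a, b):
        self.a, self.b, self.pending = a, b, False

    def step(self, event):
        """Consume one event and report the constraint state."""
        if event == self.a:
            self.pending = True       # obligation opened
        elif event == self.b:
            self.pending = False      # obligation discharged
        # While the trace may still continue, response(A,B) can never be
        # permanently satisfied (a future A reopens the obligation) nor
        # permanently violated (a future B can repair it).
        return TEMP_VIOL if self.pending else TEMP_SAT

    def close(self):
        """Verdict when the trace is declared complete."""
        return PERM_VIOL if self.pending else PERM_SAT

m = ResponseMonitor("A", "B")
print([m.step(e) for e in ["A", "C", "B", "A"]], m.close())
```

Note how permanent verdicts only arise at trace completion for this template; other templates (e.g., absence) can reach a permanent violation mid-trace, which is exactly what the automaton-based machinery detects early.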
The paper is structured as follows. Section 2 introduces the basic Declare notation and gives an overview on existing analysis tools for Declare. Section 3 recalls FOLTL, and shows how to apply it for formalizing data-aware Declare. In Section 4, the monitoring technique is thoroughly investigated.
### Table 1: Graphical notation and LTL formalization of some Declare templates
<table>
<thead>
<tr>
<th>TEMPLATE</th>
<th>FORMALIZATION</th>
</tr>
</thead>
<tbody>
<tr>
<td>existence(A)</td>
<td>$\diamond A$</td>
</tr>
<tr>
<td>absence(A)</td>
<td>$\neg \diamond A$</td>
</tr>
<tr>
<td>choice(A,B)</td>
<td>$\diamond A \lor \diamond B$</td>
</tr>
<tr>
<td>exclusive choice(A,B)</td>
<td>$(\diamond A \lor \diamond B) \land \neg (\diamond A \land \diamond B)$</td>
</tr>
<tr>
<td>responded existence(A,B)</td>
<td>$\diamond A \rightarrow \diamond B$</td>
</tr>
<tr>
<td>response(A,B)</td>
<td>$\square (A \rightarrow \diamond B)$</td>
</tr>
<tr>
<td>precedence(A,B)</td>
<td>$\neg B \,\mathcal{W}\, A$</td>
</tr>
<tr>
<td>alternate response(A,B)</td>
<td>$\square (A \rightarrow \circ(\neg A \,\mathcal{U}\, B))$</td>
</tr>
<tr>
<td>alternate precedence(A,B)</td>
<td>$(\neg B \,\mathcal{W}\, A) \land \square (B \rightarrow \circ(\neg B \,\mathcal{W}\, A))$</td>
</tr>
<tr>
<td>chain response(A,B)</td>
<td>$\square (A \rightarrow \circ B)$</td>
</tr>
<tr>
<td>chain precedence(A,B)</td>
<td>$\square (\circ B \rightarrow A)$</td>
</tr>
<tr>
<td>not resp. existence(A,B)</td>
<td>$\diamond A \rightarrow \neg \diamond B$</td>
</tr>
<tr>
<td>not response(A,B)</td>
<td>$\square (A \rightarrow \neg \diamond B)$</td>
</tr>
<tr>
<td>not precedence(A,B)</td>
<td>$\square (\diamond B \rightarrow \neg A)$</td>
</tr>
<tr>
<td>not chain response(A,B)</td>
<td>$\square (A \rightarrow \neg \circ B)$</td>
</tr>
<tr>
<td>not chain precedence(A,B)</td>
<td>$\square (\circ B \rightarrow \neg A)$</td>
</tr>
</tbody>
</table>
Section 5 grounds our approach on a concrete case study in the context of web security. Section 6 further discusses the applicability of the framework and its versatility. Finally, Section 7 concludes the paper and spells out directions for future work.
### 2. PRELIMINARIES
In this section, we introduce some background notions about Declare, and give an overview of the existing techniques for the formal analysis and runtime monitoring of Declare models.
#### 2.1 Declare
Declare is a declarative process modeling language originally introduced by Pesic and van der Aalst in [33]. Instead of explicitly specifying the flow of the interactions among process activities, Declare describes a set of constraints that must be satisfied throughout the process execution. The possible orderings of activities are implicitly specified by constraints, and anything that does not violate them is possible during execution. In comparison with procedural approaches that produce “closed” models, i.e., where everything that is not explicitly specified is forbidden, Declare models are “open” and tend to offer more possibilities for the execution. In this way, Declare enjoys flexibility and is very suitable for specifying compliance models that are used to check if the behavior of a system complies with desired regulations. The compliance model defines the rules related to a single process instance, and the overall expectation is that all instances comply with the model.
A Declare model consists of a set of constraints applied to (atomic) activities. Constraints, in turn, are based on templates. Templates are patterns that define parameterized classes of properties, and constraints are their concrete instantiations.
Table 1 summarizes some Declare templates (the reader can refer to [33] for a full description of the language), where the $\diamond$, $\square$, $\circ$, $\mathcal{U}$ and $\mathcal{W}$ LTL operators have the following intuitive meaning (see Section 3.1 for the formal semantics): formula $\diamond \phi_1$ means that $\phi_1$ holds sometime in the future, $\square \phi_1$ says that $\phi_1$ holds forever, $\circ \phi_1$ says that $\phi_1$ holds at the next instant, $\phi_1 \,\mathcal{U}\, \phi_2$ means that sometime in the future $\phi_2$ will hold and until that moment $\phi_1$ holds, and lastly $\phi_1 \,\mathcal{W}\, \phi_2$ (weak until) means that either $\phi_1$ holds forever or $\phi_1 \,\mathcal{U}\, \phi_2$ holds (with $\phi_1$ and $\phi_2$ LTL formulas).
Templates existence and absence require that activity $A$ occurs at least once and never occurs inside every process instance, respectively. Templates choice and exclusive choice indicate that $A$ or $B$ occurs eventually in each process instance. The exclusive choice template is more restrictive because it forbids $A$ and $B$ to both occur in the same process instance. The responded existence template specifies that if $A$ occurs, then $B$ should also occur (either before or after $A$). The response template specifies that when $A$ occurs, then $B$ should eventually occur after $A$. The precedence template indicates that $B$ should occur only if $A$ has occurred before. Templates alternate response and alternate precedence strengthen the response and precedence templates respectively by specifying that activities must alternate without repetitions in between. Even stronger ordering relations are specified by templates chain response and chain precedence. These templates require that the occurrences of the two activities ($A$ and $B$) are next to each other. Declare also includes some negative constraints to explicitly forbid the execution of activities. The not responded existence template indicates that if $A$ occurs in a process instance, $B$ cannot occur in the same instance. According to the not response template, any occurrence of $A$ cannot be eventually followed by $B$, whereas the not precedence template requires that any occurrence of $B$ is not preceded by $A$. Finally, according to the not chain response and not chain precedence templates, $A$ and $B$ cannot occur one immediately after the other.
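As an illustration (the function names are mine, not from Declare tooling), the informal semantics of a few templates can be checked directly on a finite trace of activity labels:

```python
# Illustrative sketch: a few Declare templates as direct checks over a
# finite trace, mirroring the informal descriptions above.

def response(trace, a, b):
    """response(A,B): every A is eventually followed by a B."""
    return all(b in trace[i + 1:] for i, x in enumerate(trace) if x == a)

def precedence(trace, a, b):
    """precedence(A,B): B may occur only after some earlier A."""
    seen_a = False
    for x in trace:
        if x == a:
            seen_a = True
        elif x == b and not seen_a:
            return False
    return True

def chain_response(trace, a, b):
    """chain response(A,B): every A is immediately followed by B."""
    return all(i + 1 < len(trace) and trace[i + 1] == b
               for i, x in enumerate(trace) if x == a)

t = ["A", "C", "B", "A", "B"]
print(response(t, "A", "B"),        # True: both As are followed by a B
      precedence(t, "A", "B"),      # True: the first B is preceded by an A
      chain_response(t, "A", "B"))  # False: the first A is followed by C
```

These post-hoc checks only work on complete traces; the contribution of the monitoring approach is to evaluate the same constraints incrementally, on partial traces, as described next.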
### 2.2 Analysis Tools for Declare
Several analysis plug-ins are available for Declare in the process mining tool ProM. In [24, 26], the authors propose an approach for monitoring Declare models based on finite state automata. This approach provides the same functionalities described in this paper, but it is limited to standard Declare specifications (i.e., data are not supported). Differently from their approach, in the technique presented here a compliance model can also include data-aware business constraints. In [23], the authors define Timed Declare, an extension of Declare based on a metric temporal logic semantics. The approach relies on timed automata to monitor metric dynamic constraints, but again data-aware specifications are not supported. As already mentioned in the introduction, in [29] the EC is used for defining a data-aware semantics for Declare. Moreover, in [30], the authors propose an approach for monitoring data-aware Declare constraints at runtime based on this semantics. This approach also allows the verification of metric temporal constraints, i.e., constraints specifying required delays and deadlines. This expressiveness comes with the limitation that the EC does not guarantee decidability when reasoning on the possible future outcomes of a partial trace, and hence is only used to check the actual events received so far. Automata-based techniques make it instead possible to identify violations early. In [25], the authors rely on a first-order variant of LTL to specify a limited version of data-aware patterns. Such extended patterns are used as the target language for a process discovery algorithm, which produces data-aware Declare constraints from raw event logs. Our FOLTL formalization for Declare extends the one presented in [25] and is tailored to a formal semantics that makes it suitable for the runtime monitoring of evolving traces.
### 3. FO-LTL FOR DATA-AWARE DECLARE
Traditional LTL on finite traces is inadequate for expressing data-aware business constraints, as its propositional variables are not expressive enough to query complex data structures. We propose a first-order variant of LTL, called First-Order LTL (FOLTL), which merges the expressivity of first-order logic with the temporal modalities of LTL.
#### 3.1 Syntax of FOLTL
We assume a relational representation of data and we define the data schema $S$ as a set of relations, each with an associated arity, and an interpretation domain $\Delta$, which is an a-priori fixed and finite set of constants. A database instance $I$ of $S$ interprets each relation symbol with arity $n$ as a subset of the cartesian product $\Delta^n$. Values in $\Delta$ are interpreted as themselves, blurring the distinction between constants and values. Given a schema $S$, $\mathcal{I}$ denotes the set of all possible database instances for $S$.
**Definition 1** (FOLTL Syntax). Given a data schema $S$, the set of closed FOLTL formulas $\Phi$ obeys the following syntax:
$$
\Phi^f := \text{true} \mid \text{Atom} \mid \neg \Phi^f \mid \Phi^f \land \Phi^f \mid \forall x.\Phi^f
$$
$$
\Phi^t := \Phi^f \mid \neg \Phi^t \mid \Phi^t \land \Phi^t \mid \bigcirc \Phi^t \mid \Phi^t \,\mathcal{U}\, \Phi^t
$$
$$
\Phi := \Phi^t \mid \forall x.\Phi
$$
where $x$ is a variable symbol and $\text{Atom}$ is an atomic first-order formula or atom, i.e., a formula inductively defined as follows: (i) true is an atomic formula; (ii) if $t_1$ and $t_2$ are constants in $\Delta$ or variables, then $t_1 = t_2$ is an atomic formula; and (iii) if $t_1, \ldots, t_n$ are constants or variables and $R \in S$ is a relation symbol of arity $n$, then $R(t_1, \ldots, t_n)$ is an atomic formula. Since $\Phi$ is closed, we assume that all variable symbols are in the scope of a quantifier.
Intuitively, $\bigcirc \Phi$ (next $\Phi$) says that $\Phi$ holds in the next instant, while $\Phi_1 \,\mathcal{U}\, \Phi_2$ ($\Phi_1$ until $\Phi_2$) says that there exists a future instant in which $\Phi_2$ will hold and, until that moment, $\Phi_1$ holds.
We define the logic symbols $\lor$ and $\exists$ as $\Phi_1 \lor \Phi_2 := \neg(\neg \Phi_1 \land \neg \Phi_2)$ and $\exists x.\Phi := \neg \forall x.\neg \Phi$. Moreover, the LTL temporal operators finally $\Diamond$, globally $\Box$ and weak until $\mathcal{W}$ are defined as: $\Diamond \Phi := \text{true} \,\mathcal{U}\, \Phi$; $\Box \Phi := \neg \Diamond \neg \Phi$ and $\Phi_1 \,\mathcal{W}\, \Phi_2 := (\Phi_1 \,\mathcal{U}\, \Phi_2) \lor (\Box \Phi_1)$. We observe that quantifiers for variables which occur in the scope of temporal operators are required to be at the front of the formula. We call such variables across-state variables.
#### 3.2 Semantics of FOLTL over Finite Traces
Our analysis is not only based on finite traces, but it is performed at runtime, that is, while such traces are evolving. Roughly speaking, we assume a business process that produces events and possibly modifies data: each time it does so, we take the trace seen so far, i.e., the history of events and data, and we evaluate it considering that the process execution can still continue. This evolving aspect has a significant impact on the evaluation function: at each step, indeed, the monitor may return truth values which have a degree of uncertainty, due to the fact that future executions are yet unknown.
Without loss of generality, in what follows, we assume that the events generated by the process are stored in the database instance, and hence they are treated like data. A more detailed explanation on how we deal with events is given in Section 3.3.
We now define the FOLTL semantics for finite traces that evaluates a FOLTL formula given a finite and completed trace. Then, we show how to use this semantics for building our evaluation function and monitoring progressing data instances.
Before showing the semantics of the language, we need to introduce the notion of assignment. An assignment $\eta$ is a function that associates to each free variable $x$ a value $\eta(x)$ in $\Delta$. Let $\eta$ be an assignment; then $\eta_{x/d}$ is the assignment that agrees with $\eta$, except for variable $x$, which is now assigned the value $d \in \Delta$. We denote with $\Phi[\eta]$ the formula obtained from $\Phi$ by replacing its free variable symbols with the corresponding values in $\eta$.
A finite trace of length $n+1$ for a data schema $S$ is a finite sequence $I_0, I_1, \ldots, I_n$ of database instances over $S$, i.e., a function $\pi: \{0, \ldots, n\} \rightarrow \mathcal{I}$ that assigns a database instance $\pi(i) \equiv I_i$ to each time instant $i \in \{0, \ldots, n\}$.
**Definition 2 (FOLTL finite-trace semantics).**
Given a FOLTL formula $\Phi$ over a schema $S$, a domain $\Delta$, an assignment $\eta$ and a finite trace $\pi$ of length $n+1$, we inductively define when $\Phi$ is true at an instant of time $0 \leq i \leq n$, written $(\pi, i, \eta) \models \Phi$, as follows:
- $(\pi, i, \eta) \models \text{true}$;
- $(\pi, i, \eta) \models \text{Atom} \iff (\pi(i), \eta) \models \text{Atom}$, where $(\pi(i), \eta) \models \text{Atom}$ is the usual evaluation function;
- $(\pi, i, \eta) \models \neg\Phi \iff (\pi, i, \eta) \not\models \Phi$;
- $(\pi, i, \eta) \models \Phi_1 \land \Phi_2 \iff (\pi, i, \eta) \models \Phi_1$ and $(\pi, i, \eta) \models \Phi_2$;
- $(\pi, i, \eta) \models \forall x. \Phi \iff \text{for all } d \in \Delta, (\pi, i, \eta_{x/d}) \models \Phi$;
- $(\pi, i, \eta) \models \bigcirc \Phi \iff i < n$ and $(\pi, i + 1, \eta) \models \Phi$;
- $(\pi, i, \eta) \models \Phi_1 \,\mathcal{U}\, \Phi_2 \iff \text{for some } i \leq j \leq n \text{ we have } (\pi, j, \eta) \models \Phi_2$ and for all $i \leq k < j \text{ we have } (\pi, k, \eta) \models \Phi_1$.
Furthermore, $(\pi, \eta) \models \Phi \iff (\pi, 0, \eta) \models \Phi$ and, when $\Phi$ is closed (which is our assumption for constraints), we can simply write $\pi \models \Phi$.
Notice that when a formula does not contain any temporal operator, its semantics corresponds to the traditional first-order semantics. Notice also that the domain $\Delta$ is the same for each instant of time (cf. [16] for a discussion on different semantics for first-order modal logics). From the syntax, the semantics and the assumption of a finite and fixed domain, we can translate every FOLTL formula into one in *temporal prenex normal form*, i.e., with all across-state quantifiers at the front. From now on, we assume formulas to be in this form.
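To make the finite-trace semantics of Definition 2 concrete, the following is a minimal Python sketch (our own code, not part of the cited works; all constructor and function names are ours). A database instance is a set of ground atoms, a formula is a nested tuple, and `env` plays the role of the assignment $\eta$:

```python
def holds(phi, trace, i, env, domain):
    """Evaluate (trace, i, env) |= phi per the finite-trace semantics."""
    op = phi[0]
    if op == "true":
        return True
    if op == "atom":                      # ("atom", R, t1, ..., tn)
        ground = tuple(env.get(t, t) for t in phi[2:])
        return (phi[1],) + ground in trace[i]
    if op == "not":
        return not holds(phi[1], trace, i, env, domain)
    if op == "and":
        return all(holds(p, trace, i, env, domain) for p in phi[1:])
    if op == "forall":                    # ("forall", x, psi)
        return all(holds(phi[2], trace, i, {**env, phi[1]: d}, domain)
                   for d in domain)
    if op == "next":                      # strong next: fails at the last instant
        return i + 1 < len(trace) and holds(phi[1], trace, i + 1, env, domain)
    if op == "until":                     # ("until", phi1, phi2)
        return any(holds(phi[2], trace, j, env, domain) and
                   all(holds(phi[1], trace, k, env, domain) for k in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError(f"unknown operator {op}")

# Derived operators, as in the text
def eventually(p): return ("until", ("true",), p)
def always(p):     return ("not", eventually(("not", p)))
def exists(x, p):  return ("not", ("forall", x, ("not", p)))
```

For instance, on the two-instant trace `[{("p","a")}, {("q","a")}]` with domain `{"a"}`, the formula $\forall x.\Diamond q(x)$ evaluates to true at instant 0, while $\Box p(a)$ does not.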
Given the interest of the verification community in runtime monitoring, different monitoring evaluation functions have been proposed [15, 10]. Here, we adapt RV-LTL [5] to our first-order setting, and we call it RV-FOLTL. Such a semantics is tightly related to the notion of bad and good prefixes introduced in [22].
Given a FOLTL formula $\Phi$, a bad prefix for $\Phi$ is a finite trace such that any (finite) extension of it does not satisfy $\Phi$. In other words, no matter the continuation of the prefix, the formula $\Phi$ will always be evaluated to false since that moment on. Analogously, a good prefix can be defined as a finite trace which, no matter its continuation, will always satisfy (together with any continuation) property $\Phi$.
Recalling that we are monitoring evolving executions, the definition of good and bad prefixes is particularly useful, because it allows us to evaluate the formula with a definitive truth value even if the trace is still evolving. Unfortunately, there are FOLTL formulas that have neither a good nor a bad prefix, and hence cannot be evaluated as true or false until the current execution is actually finished. In such cases, we are still able to return a "temporary" truth value. In particular, we consider the partial trace $\pi$ seen so far as if it were completed, and we evaluate it according to the semantics in Definition 2: if $\pi$ currently satisfies $\Phi$ but there is a possible prosecution of it that leads to falsifying $\Phi$, then we say that $\Phi$ is temporarily satisfied; if, instead, $\pi$ currently falsifies $\Phi$ but there is a possible prosecution of it that leads to satisfying $\Phi$, then we say that $\Phi$ is temporarily violated.
Notice that, while evaluating a progressing trace, at each time instant we do not know whether the trace will still evolve or the current instance is the last one, i.e., whether the process execution is finished. Without loss of generality, we introduce a propositional variable $\text{Last}$, which is set to true by the external process when it terminates, and indicates that the current instance is the closing one of the trace.
**Definition 3 (RV-FOLTL semantics).**
Given a FOLTL formula $\Phi$ and a finite trace $\pi$ of current length $n+1$ (but possibly still progressing), the monitoring evaluation function of $\Phi$ on $\pi$, denoted by $[\pi \models \Phi]$, returns an element of the set $\{\text{true}, \text{false}, \text{temp\_true}, \text{temp\_false}\}$ defined as follows:
- $[\pi \models \Phi] := \text{true} \iff$ either $\pi(n) \models \text{Last}$ and $\pi \models \Phi$, or $\pi(n) \not\models \text{Last}$, $\pi \models \Phi$ and for every finite possible prosecution $\sigma$ we have $\pi\sigma \models \Phi$, i.e., $\pi$ is a good prefix for $\Phi$;
- $[\pi \models \Phi] := \text{false} \iff$ either $\pi(n) \models \text{Last}$ and $\pi \not\models \Phi$, or $\pi(n) \not\models \text{Last}$, $\pi \not\models \Phi$ and for every finite possible prosecution $\sigma$ we have $\pi\sigma \not\models \Phi$, i.e., $\pi$ is a bad prefix for $\Phi$;
- $[\pi \models \Phi] := \text{temp\_true} \iff \pi(n) \not\models \text{Last}$, $\pi \models \Phi$ and there exists a possible prosecution $\sigma$ such that $\pi\sigma \not\models \Phi$;
- $[\pi \models \Phi] := \text{temp\_false} \iff \pi(n) \not\models \text{Last}$, $\pi \not\models \Phi$ and there exists a possible prosecution $\sigma$ such that $\pi\sigma \models \Phi$.
No other cases are possible. In particular, if $\Phi$ is temporarily satisfied or temporarily violated, there always exist both a prosecution that falsifies $\Phi$ and one that satisfies it; otherwise $\pi$ would be a good or a bad prefix, respectively.
### 3.3 Declare Patterns in FOLTL
We now ground our FOLTL-based approach to the case of Declare, extending it to accommodate not only control-flow aspects, but also data-related ones, in the style of [28, 29]. We remark that this is the first attempt to formalize data-aware Declare with temporal logics and exploit (finite state) automata-based techniques for monitoring.
As pointed out in [28, 29], a fundamental limitation of Declare in its basic form is the lack of data support: only (atomic) activities can be constrained. To overcome this limitation, [29] proposes an extension of the language to support data-aware activities and data conditions to augment the original Declare templates. The extension can be applied to both atomic and non-atomic activities, following the approach proposed in [30]. Here we focus on atomic activities only, briefly discussing the extension to non-atomic activities in Section 4.2. Nevertheless, the semantics we give here reconstructs faithfully the ideas presented in [30].
Table 2 shows how the basic Declare templates, extended with data, can be formalized using FOLTL. The idea is to attach to each activity a payload that carries the data involved in its execution. For example, the fact that customer $\text{john}$ closes an order identified by 123 can be represented by the activity instance $\text{close\_order(john, 123)}$. This corresponds to an instance of the activity $\text{close\_order}/2$ with payload $(\text{john}, 123)$, which in our model corresponds to a fact of relation $\text{close\_order}(\text{Cust}, O_{\text{id}})$ (called activity type). We then assume that the relations in $S$ are partitioned into two sets $\mathcal{A}$ and $\mathcal{R}$, where $\mathcal{A}$ is a set of activity relations (one per activity type), and $\mathcal{R}$ contains the other (business-)relevant relations of the domain of interest. When monitoring a concrete system execution, the extension of such relations is manipulated as follows: every time an activity $A$ is executed with payload $\bar{d}$ over $\Delta$, (i) the content of all relations in $\mathcal{A}$ is emptied, (ii) payload $\bar{d}$ is inserted into relation $A \in \mathcal{A}$, and (iii) the effects of the execution are incorporated by manipulating the extension of the relations in $\mathcal{R}$ accordingly. Notice that this is a widely used mechanism to store payloads in data-aware processes (cf. [21]). To enforce the last point, we assume that the traced log of the system execution does not only list which activities have been executed and with which payload, but also which facts are deleted and added by each activity execution. More specifically, we define an event as a tuple $(A(\bar{d}), ADD, DEL)$, where $A$ is an activity in $\mathcal{A}$, $\bar{d}$ is the payload of the executed activity instance, constituted by elements from $\Delta$ and with size compatible with the arity of $A$, and $ADD$/$DEL$ are sets of facts over $\mathcal{R}$ that must respectively be added to and deleted from the current database.
We assume that $ADD$ has higher priority than $DEL$ (i.e., if the same fact is asserted to be added and deleted in the same event, it is added).
With this notion at hand, we define a system log as a pair $(I_0, \mathcal{E})$, where $I_0$ is the (initial) database instance (defined in such a way that the extension of each activity type $A \in \mathcal{A}$ is empty), and $\mathcal{E}$ is a finite list of events $(e_1, \ldots, e_n)$. A system log maps into a trace $I_0, I_1, \ldots, I_n$ in the sense of Section 3.2 as follows: for each $I_i$ with $i > 0$, given $e_i = (A(\bar{d}), ADD, DEL)$, we have $I_i = (I_{i-1}^{\mathcal{R}} \setminus DEL) \cup ADD \cup \{A(\bar{d})\}$, where $I_{i-1}^{\mathcal{R}}$ is the database instance obtained from $I_{i-1}$ by considering only the tuples of relations in $\mathcal{R}$. In this light, a query $\xi_A(\bar{x}) = A(\bar{p}) \land \Phi(\bar{y})$, issued over the current database $I$, returns false if $A$ is not the last-executed activity, or the answer of $\Phi(\bar{y})$ over $I$ if $A$ has been the last-executed activity, with payload $\bar{p}$. Queries of the form $\xi_A(\bar{x})$ are the basic building blocks for the FOLTL-based formalization of data-aware Declare templates: they combine a test over the execution of the involved activity with a query over the current database. Such queries replace the activity name propositions used in standard Declare (cf. Table 1). As shown in Table 2, first-order quantification is used as follows.
Existence, absence and (exclusive) choice templates existentially quantify over $\bar{x}$, asserting that there must exist a state where the target activity (or one of the target activities) is executed so as to satisfy query $\Phi$. For example, $\text{existence}(\text{Close\_order}(c, o_{\text{id}}) \land \text{Gold}(c))$ models that at least one gold customer is expected to close an order during a system execution. All the other (binary) templates, with source $\xi_A(\bar{x}) = A(\bar{p}_A) \land \Phi_A(\bar{x})$ (with $\bar{p}_A \subseteq \bar{x}$) and target $\xi_B(\bar{x}, \bar{y}) = B(\bar{p}_B) \land \Phi_B(\bar{x}, \bar{y})$ (with $\bar{p}_B \subseteq \bar{x}\bar{y}$), universally quantify over $\bar{x}$ (with scope the entire constraint), and existentially quantify over $\bar{y}$, asserting that for every payload $\bar{p}_A$ of $A$ and every query answer to $\Phi_A$, an execution of activity $B$ is expected to happen that satisfies the dynamic constraint imposed by the template, as well as the involved data-aware conditions over $\bar{p}_B$ and $\Phi_B(\bar{x}, \bar{y})$. It is worth noting that both the payload of $B$ and the query $\Phi_B$ could make use of some of the variables contained in $\bar{p}_A$ and/or $\bar{x}$. If this is the case, the common variables play the role of a correlation mechanism between the source and target activities/conditions. For example, the constraint $\text{response}(\text{Close\_order}(c, o_{\text{id}}),\ \text{Pay\_order}(c', o_{\text{id}}) \land (c' = c \lor \text{Responsible}(c', c)))$ states that whenever an order $o_{\text{id}}$ is closed by customer $c$, that order must eventually be paid either by $c$ herself, or by another customer $c'$ who is responsible for $c$. Observe that the constraint universally quantifies over $c$ and $o_{\text{id}}$, whereas it existentially quantifies over $c'$.
Furthermore, $o_{\text{id}}$ and $c$ are used to correlate the source and target activities and conditions. Finally, as in the propositional case, negation templates express the negative version of relation templates.
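As a concrete illustration of the log-to-trace mapping of Section 3.3, the following Python sketch (identifiers are ours, not from the cited works) turns a system log into a trace, clearing the activity relations at each step and giving $ADD$ priority over $DEL$:

```python
def log_to_trace(i0, events, activity_relations):
    """i0: initial set of ground atoms (tuples ("Rel", v1, ..., vn));
    events: list of (activity_atom, add_set, del_set);
    activity_relations: names of the relations in the activity set A."""
    trace = [frozenset(i0)]
    for act_atom, add, dele in events:
        prev = trace[-1]
        # (i) empty all activity relations: keep only business facts
        business = {f for f in prev if f[0] not in activity_relations}
        # (iii)+(ii): remove DEL, then apply ADD (so ADD wins on conflicts),
        # and record the executed activity instance
        state = (business - set(dele)) | set(add) | {act_atom}
        trace.append(frozenset(state))
    return trace
```

For instance, replaying two `Post` events over an initial instance shows that the previous `Post` fact disappears at the next instant, while business facts persist unless explicitly deleted.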
### 4. THE MONITORING APPROACH
In this section, we show how to actually build the monitor for data-aware Declare constraints. The problem of verifying (both offline and at runtime) data-aware temporal constraints is theoretically challenging, being undecidable in general. It requires knowledge of both verification and databases, and only recently has it actually been addressed by the scientific community [20, 7, 9, 12]. Indeed, most of the literature on runtime monitoring focuses on checking propositional formulas [10, 5, 11], while the database community has mainly studied offline analysis of temporal constraints on a database history [14, 8]. In [3], open first-order temporal properties are monitored, and the proposed technique returns assignments that falsify the formula. However, the logic is too expressive to support satisfiability checking and, more importantly, there is no "lookahead" mechanism over possible future evolutions (indeed, automata are not used), so the recognized bad prefixes are not minimal. Also, [19] investigates monitoring of first-order formulas, but a naive solution is adopted and no attention is paid to complexity.
#### 4.1 Monitoring FOLTL Constraints
The recent work in [13] presents a flexible, automaton-based approach for runtime monitoring of data-aware properties, showing how to efficiently build a finite-state machine which recognizes good and bad prefixes of FOLTL formulas under the assumption of a finite and fixed domain. One of the major issues when taking data into account is efficiency. The main contribution of [13] is a monitor construction requiring EXPSPACE in the size of the formula, while a naive approach would require space exponential in the size of the domain. Note that formulas are usually short, being written by humans, while the domain of constants (of a database) is large. We briefly discuss this approach and illustrate how to adapt it to our case.
The technique relies on the propositional Büchi automaton construction [2, 17] for a (propositional) LTL formula. However, since the language used in [13] is FOLTL, the notion of first-order (FO) automaton is introduced. A FO automaton is essentially built with the same procedure as a propositional Büchi automaton, but its transitions are labeled with first-order formulas without temporal operators, and its states contain data structures to smartly keep track of data. A Büchi automaton for a propositional LTL formula $Φ_p$ is a finite state machine which accepts the infinite traces satisfying $Φ_p$, by requiring that a final state is visited infinitely often. States of the automaton which do not lead to a cycle including a final state are marked as bad, as from them the accepting condition can never be satisfied.
In order to recognize both the bad and good prefixes of a FOLTL formula $Φ$, two automata are needed: one is the FO automaton for $Φ$ and the other one is the FO automaton for $¬Φ$. The first one is transformed into a finite state machine which recognizes the bad prefixes of $Φ$ by simply determining its bad states. Analogously, the automaton for $¬Φ$ is transformed into a finite state machine recognizing the bad prefixes of $¬Φ$, which are the good prefixes of $Φ$. The final monitor is the conjunction of the two.
As explained in [13], to monitor an evolving trace it is enough to navigate the resulting automaton's states: from the current state(s), the transitions satisfied by the current database instance and the data in the current state (recall that transitions are labeled with first-order formulas) are identified, and the new state(s) of the automaton are updated accordingly (more details are provided in Section 4.3). The semantics used in [13] is called LTL$_3$ [5], as it accounts for three truth values: if a bad state is reached, the formula is falsified; if a good state is reached, the formula is satisfied; and if neither of the two, the monitor returns the inconclusive truth value *?*. This semantics differs from the one given in Definition 3, but the technique in [13] is flexible enough to be extended to match our purposes. In what follows, we first illustrate the traditional technique to monitor propositional formulas under the RV-LTL semantics, and then we show how to adapt it to our first-order scenario using the ideas in [13].
#### 4.2 Monitoring RV-LTL Constraints
Runtime verification of propositional LTL declarative process models under RV-LTL semantics has been studied in [27, 23]. The task is performed by means of a finite state machine which is built from a propositional LTL formula using the translation in [15]. This translation is, again, based on the propositional Büchi automaton for the formula and on the notion of good and bad prefixes. However, it presents significant differences from the analysis described in the previous section based on LTL$_3$ semantics. Indeed, while LTL$_3$ relies on the traditional infinite-trace LTL semantics, the technique in [18] naturally follows from the finite LTL semantics in Definition 2. From the theoretical viewpoint, given a propositional LTL formula $Φ_p$, the finite-state machine for $Φ_p$ obtained as described in [15] recognizes all finite traces satisfying $Φ_p$. The algorithm simply navigates the automaton states and checks if the current state is final. If it is final, the trace is accepted, otherwise it is rejected.
The work in [27, 23] adapts this algorithm to monitor evolving traces as follows. Automaton states are traversed as the trace is evolving, and at each step:
- if the execution is finished (i.e., *Last* is true) and the current state is final, or the execution is not finished, the current state is final and there is no path from it reaching a non-final state, then return *true*;
- if the execution is finished (i.e., *Last* is true) and the current state is non-final, or the execution is not finished, the current state is non-final and there is no path from it reaching a final state, then return *false*;
- if the trace is still evolving, the current state is final and there is a path from it reaching a non-final state, then return *temp_true*;
- if the trace is still evolving, the current state is non-final and there is a path from it reaching a final state, then return *temp_false*.
It is easy to see that this technique implements the semantics in Definition 3. The difference between LTL$_3$ and RV-LTL semantics is more evident when we consider what happens when *Last* holds. Since the semantics of LTL$_3$ is based on infinite traces, when the execution finishes and neither a good nor a bad prefix has been seen so far, the truth value of the monitor remains undetermined, namely *?*. On the contrary, the RV-LTL semantics guarantees that either true or false is returned when the execution terminates.
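The four cases above amount to a reachability check on the finite-state machine. A minimal Python sketch (our own encoding, not that of the cited implementations; `transitions` maps each state to its labeled successors):

```python
def reachable(state, transitions):
    """All states reachable from `state`, including itself."""
    seen, stack = set(), [state]
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        stack.extend(transitions.get(s, {}).values())
    return seen

def verdict(state, transitions, finals, last):
    """RV-LTL verdict for the current automaton state."""
    reach = reachable(state, transitions)
    is_final = state in finals
    if last:                                   # execution finished
        return "true" if is_final else "false"
    if is_final:                               # still evolving, final state
        leaves_finals = any(s not in finals for s in reach)
        return "temp_true" if leaves_finals else "true"
    # still evolving, non-final state
    reaches_final = any(s in finals for s in reach)
    return "temp_false" if reaches_final else "false"
```

On the automaton for $\Diamond p$ (a non-final initial state that moves to a final sink once $p$ is observed), the initial state yields *temp_false* while the trace evolves, and the final sink yields *true*, being a good prefix.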
#### 4.3 FO Automaton for RV-LTL
The ideas presented in [13] are versatile and can be used on every kind of finite-state machine. We illustrate how they can be used to build an RV-LTL monitor for FOLTL formulas.
As a first step, since the domain of constants $Δ$ is finite, a FOLTL formula $Φ$ can be propositionalized, i.e., transformed into an equivalent propositional formula. Intuitively, the (recursive) propositionalization procedure $p$, when applied to a universal quantifier, returns the conjunction over all possible assignments for the quantified variable (analogously, it returns the disjunction for an existential quantifier), and then it associates a propositional variable to each ground atom of $Φ$. Given that $Φ$ can be translated into temporal prenex normal form, formula $p(Φ)$ has the structure $\bigwedge_{d_1 \in Δ} \bigvee_{d_2 \in Δ} \cdots \bigwedge_{d_m \in Δ} p(Φ'[x_1/d_1, x_2/d_2, \ldots, x_m/d_m])$, where $Φ'$ is $Φ$ deprived of its leading quantifiers. This allows us to monitor $p(Φ'[x_1/d_1, x_2/d_2, \ldots, x_m/d_m])$ for each assignment to the across-state variables separately, which (once substituted for the variables of $Φ$) returns a value among
<table>
<thead>
<tr>
<th>∧</th>
<th>true</th>
<th>false</th>
</tr>
</thead>
<tbody>
<tr>
<td>temp_true</td>
<td>temp_true</td>
<td>false</td>
</tr>
<tr>
<td>temp_false</td>
<td>temp_false</td>
<td>false</td>
</tr>
</tbody>
</table>
Table 3: Evaluation functions for the conjunction operator under the RV-LTL semantics
<table>
<thead>
<tr>
<th>∨</th>
<th>true</th>
<th>false</th>
</tr>
</thead>
<tbody>
<tr>
<td>temp_true</td>
<td>true</td>
<td>temp_true</td>
</tr>
<tr>
<td>temp_false</td>
<td>true</td>
<td>temp_false</td>
</tr>
</tbody>
</table>
Table 4: Evaluation functions for the disjunction operator under the RV-LTL semantics
$\{\text{true}, \text{false}, \text{temp\_true}, \text{temp\_false}\}$. The result for the whole formula $Φ$ is then obtained as in Tables 3 and 4.
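One simple way to realize Tables 3 and 4 in code, under the assumption (ours) that the full tables follow the information ordering false < temp_false < temp_true < true, is to take the minimum for conjunction and the maximum for disjunction over that ordering:

```python
# Four-valued RV-LTL combination, sketched as min/max over an ordering.
# The ordering is our reading of Tables 3 and 4, not an official definition.
ORDER = ["false", "temp_false", "temp_true", "true"]

def conj(values):
    """Conjunction of RV-LTL verdicts (Table 3): the 'worst' value wins."""
    return min(values, key=ORDER.index)

def disj(values):
    """Disjunction of RV-LTL verdicts (Table 4): the 'best' value wins."""
    return max(values, key=ORDER.index)
```

This reproduces the tabulated cases, e.g. a *temp_false* conjunct combined with a definitive *true* yields *temp_false*, while any *false* conjunct makes the whole conjunction *false*.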
Based on the general technique presented in [13], we can use an automaton plus auxiliary data structures to monitor each assignment. Let $Φ$ be a FOLTL formula in temporal prenex normal form, with all across-state quantifiers at the front. We first get rid of them, obtaining an open formula $Φ'$. Then, we build the monitor $\mathcal{A}$ for $Φ'$ following the procedure in [18], by treating the atomic formulas of $Φ'$ as propositional symbols. The resulting FO automaton $\mathcal{A}$ is like a propositional one, except that its transitions are labeled with possibly open (because we got rid of the quantifiers) first-order formulas.
Also, we need to keep track of data. Let $\mathcal{H}$ be the set of all assignments to the across-state variables of $Φ$. Each state $s$ of $\mathcal{A}$ is marked with a set of assignments given by a marking function $m$, and at each step such a marking is updated according to the new database instance presented as input. At the beginning, the initial state $s_0$ is marked with all assignments, namely $m(s_0) = \mathcal{H}$. When a new event $e_i = (A(\bar{d}), ADD_i, DEL_i)$ is presented as input, we first compute the new database instance $I_i$ from $e_i$ and $I_{i-1}$ as described in Section 3.3. Then we check, for each state $s$ of $\mathcal{A}$ and for each assignment $η \in m(s)$, which outgoing transition is satisfied by $I_i$. Recalling that a transition $s \to s'$ is labeled with an open first-order formula $γ$, we check whether $I_i \models γ[η]$: if it does, then we move the assignment $η$ to state $s'$. When the new marking has been computed for all states, we perform the analysis described in the previous section, i.e., for each assignment $η$ we check if there exists a path $p$ that leads $η$ to a final state, in order to assign a truth value in $\{\text{true}, \text{false}, \text{temp\_true}, \text{temp\_false}\}$ to $η$. Notice that, in doing so, the free variables of the transitions involved in $p$ have to be assigned according to $η$ (cf. Section 5). Finally, we compose such values in order to evaluate the overall formula. This result is the output of the monitor and implements the RV-FOLTL runtime semantics in Definition 3.
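A single marking-update step can be sketched as follows (a simplified rendering with our own names; transition guards are modeled as Python predicates over the current database instance and an assignment, and assignments are hashable tuples of variable/value pairs):

```python
def update_marking(marking, guards, db):
    """One step of the FO-automaton marking update.

    marking: dict state -> set of assignments (tuples of (var, value) pairs)
    guards:  dict (src, dst) -> predicate(db, eta) standing for the open
             first-order formula labeling the transition src -> dst
    db:      the new database instance (a set of ground atoms)
    """
    new_marking = {s: set() for s in marking}
    for s, etas in marking.items():
        for eta in etas:
            # move eta along every outgoing transition whose guard it satisfies
            for (src, dst), guard in guards.items():
                if src == s and guard(db, eta):
                    new_marking[dst].add(eta)
    return new_marking
```

For example, with a guard checking whether a fact about the assigned value is in the database, assignments split between a looping transition and a transition to a successor state, mirroring the marking evolution illustrated in Section 5.2.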
### 5. APPLICATION TO WEB SECURITY
We believe the runtime monitoring technique presented here can be widely used in a broad range of security scenarios, where, to the best of our knowledge, no sophisticated analysis based on temporal formulas (and their corresponding automata) has yet been put in place.
For example, with the increasing attention of governments to open-source intelligence (OSINT) analysis, administrators of public forums want to disclaim their responsibility for the risky behavior of groups of users. To this purpose, administrators want to automatically check whether (temporal) forum constraints are met by users. We assume a database schema containing the following set $\mathcal{R}$ of business relevant relations:
- Users(usr, cntr), the list of people registered to the forum;
- UntsdCntr(cntr), the list of countries whose government is not trusted by intergovernmental organizations;
- BnndWrd(str), the list of banned words.
We also assume the system processes events of different types, which capture different activities performed by the users. Hence, we have the following activity relations A:
- Post(usr, str) stores the payload of post events, whose (singleton) tuple represents a user usr posting a new comment str;
- Login(usr, cntr) captures the login event of usr from a device located in cntr;
- Delete(usr, str) stores the payload of delete events, whose (singleton) tuple represents user usr deleting a post str previously written by usr.
The forum administrator checks the following constraints:
- Alternate response: users cannot log in from outside their own country unless they delete all posts containing banned words: $\forall usr, str.\ \Box((Post(usr, str) \land BnndWrd(str)) \rightarrow ((\neg\exists cntr, cntr_u.\,(Login(usr, cntr) \land Users(usr, cntr_u) \land cntr \neq cntr_u))\ \mathcal{U}\ Delete(usr, str)))$;
- Absence: users from untrusted countries cannot remove posts: $\Box(\neg\exists usr, str, cntr.\,(Delete(usr, str) \land Users(usr, cntr) \land UntsdCntr(cntr)))$;
- Response: posts containing banned words have to be eventually deleted: $\forall usr, str.\ \Box((Post(usr, str) \land BnndWrd(str)) \rightarrow \Diamond Delete(usr, str))$.
Moreover, we assume two safety constraints stating that once a user is registered, she remains registered forever, and that once an untrusted country is added, it remains untrusted forever. These can be expressed in FOLTL as $\forall usr, cntr.\ \Box(Users(usr, cntr) \rightarrow \bigcirc Users(usr, cntr))$ and $\forall cntr.\ \Box(UntsdCntr(cntr) \rightarrow \bigcirc UntsdCntr(cntr))$, respectively. We observe that, even if the above formulas do not mention events (and indeed they do not follow any Declare pattern), they can still be expressed in FOLTL to constrain the evolution of the data.
#### 5.1 Construction of the FO Automaton
For the sake of simplicity, we now show how to get the FO automaton that monitors the conjunction of the absence and response formulas only, together with the two safety constraints above. The temporal prenex normal form of their conjunction is:
$$
\begin{aligned}
\Phi :\ \forall usr_2, str_2, usr_3, cntr_3, cntr_4.\ \big(\ & \Box(\neg\exists usr, str, cntr.\,(Delete(usr, str) \land Users(usr, cntr) \land UntsdCntr(cntr)))\ \land \\
& \Box((Post(usr_2, str_2) \land BnndWrd(str_2)) \rightarrow \Diamond Delete(usr_2, str_2))\ \land \\
& \Box(Users(usr_3, cntr_3) \rightarrow \bigcirc Users(usr_3, cntr_3))\ \land \\
& \Box(UntsdCntr(cntr_4) \rightarrow \bigcirc UntsdCntr(cntr_4))\ \big)
\end{aligned}
$$
Then, we discard all the quantifiers at the front and we build the FO automaton of the formula, considering atoms as propositional variables. The FO automaton for $\Phi$ has 8 states and 72 transitions. For lack of space, Figure 1 shows only the fragment of the automaton that is meaningful for our example (we pruned transitions which do not satisfy the antecedent of the safety constraints and transitions leading to non-final sink states). Double-circled states $q_0$ and $q_1$ are final, and $q_0$ is also the initial state.
#### 5.2 Monitoring Sample Process Instances
In what follows, we consider the domain $\Delta = \{ u_1, u_2, c_1, c_2, s_1, s_2 \}$ and the following initial database instance $I_0$:
$$
I_0 = \{\, \text{Users}(u_1, c_1),\ \text{Users}(u_2, c_2),\ \text{UntsdCntr}(c_2),\ \text{BnndWrd}(s_1) \,\}
$$
and we simulate the execution of the algorithm as new events are presented as input.
Let $\mathcal{H}$ be the set of all assignments for the across-state variables $(usr_2, str_2, usr_3, cntr_3, cntr_4)$; initially, $m(q_0) = \mathcal{H}$. The first event is $e_1 = (\text{Post}(u_1, s_5), \emptyset, \emptyset)$, which produces the database instance $I_1$:
$$
I_1 = \{\, \text{Users}(u_1, c_1),\ \text{Users}(u_2, c_2),\ \text{UntsdCntr}(c_2),\ \text{BnndWrd}(s_1),\ \text{Post}(u_1, s_5) \,\}
$$
and the new marking is $m(q_0) = \mathcal{H}_1$, $m(q_1) = \mathcal{H}_2$ and $m(q_2) = \emptyset$, where $\mathcal{H}_1 = \{ \eta \mid (I_1, \eta) \models \neg\varphi_1 \land \neg\varphi_2 \land \neg\varphi_4 \}$ is the set of assignments satisfying the formula on the looping transition of $q_0$ given $I_1$, and $\mathcal{H}_2 = \{ \eta \mid (I_1, \eta) \models \neg\varphi_1 \land \neg\varphi_2 \land \varphi_4 \}$ is the set of assignments satisfying the formula on the transition from $q_0$ to $q_1$ given $I_1$. Since both $q_0$ and $q_1$ are final states, no further analysis is required to check if a final state is reachable, and the output of the monitor is *temp_true*. In this case, the fixed and finite domain assumption is not too restrictive: indeed, even though $s_5 \notin \Delta$, the algorithm still works correctly, since the strings which "activate" the constraints are those matching banned words (which, as a consequence, are already in the domain).
The next event is \( e_2 = \langle \{ \text{Post}(u_2, s_2), \text{BanndWrd}(s_2) \}, \emptyset \rangle \); hence, we get the following database instance \( I_2 \):
\[
I_2 = \{ \text{User}(u_1, c_1), \text{User}(u_2, c_2), \text{UntstdCntr}(c_2), \text{BanndWrd}(s_1), \text{BanndWrd}(s_2), \text{Post}(u_2, s_2) \}
\]
Notice that not only has a new event been observed, but the data has also changed (a new banned word has been added). Now a user from an untrusted country is posting something bad.
All assignments in \( q_1 \) with \( \mathit{usr} \mapsto u_2 \) and \( \mathit{str} \mapsto s_2 \), by satisfying the transition formula \( \neg \phi_1 \land \phi_2 \land \neg \phi_3 \land \phi_4 \), move to state \( q_2 \). Let us focus on one such assignment, namely \( \tilde{\eta} = (\mathit{usr} \mapsto u_2, \mathit{str} \mapsto s_2, \mathit{ctr} \mapsto c_2) \).
We have to compute the truth value for \( \tilde{\eta} \). From the semantics, \( \tilde{\eta} \) is evaluated to \( \text{false} \). The truth value of \( \Phi \) is the conjunction of the truth values of all assignments, its variables being all universally quantified, hence resulting in \( \text{false} \).
This result may seem unexpected, since no constraint directly forbids users from untrusted countries from posting banned words. However, this is an example of early detection: nothing bad has happened yet, but the reasoning capabilities enabled by the use of temporal logics and automata recognize that no continuation of the current trace satisfies the constraints. Indeed, our temporal analysis recognizes that the conjunction of the absence constraint (users from untrusted countries cannot delete posts), the response constraint (posts with banned words have to be eventually deleted), and the two safety constraints (users and banned words cannot be deleted from the database) entails that as soon as a user from an untrusted country posts a banned word, the constraints are violated.
A few other remarks are in order. First of all, when the monitor returns \( \text{true} \) or \( \text{false} \), the runtime analysis can be stopped, since by definition such truth values will not change regardless of any future continuation of the trace. Also, notice that this approach does not require the whole trace to evaluate a formula, but only the current database instance: data are kept in the automaton as assignments to the formula being monitored, in such a way that only the data relevant to evaluating this formula are stored. Finally, in many scenarios, such as the one described in this example, the finite and fixed domain assumption is not too strict. On the one hand, the active domain of the initial database instance \( I_0 \) is usually taken as the monitoring domain, and it can also be augmented with other business-relevant constants. On the other hand, the assumption is the cost to be paid to gain decidability and efficiency: when we relax it, the monitoring task becomes undecidable, as shown in [1].
6. DISCUSSION
We conclude the analysis of our framework by discussing its applicability to monitoring concrete system traces stored in the XES format [4], and its versatility in modeling dynamic constraints that go beyond data-aware Declare.
6.1 Monitoring XES Traces
Recently, the IEEE Task Force on Process Mining has proposed XES [4] as a standard, XML-based format for representing (process) execution traces. It is the result of a thorough analysis of concrete process-aware information systems and the kind of information they log.
An XES log is minimally conceived as consisting of a set of traces (i.e., specific process instances), which, in turn, are described by sequences of events. Each of these three concepts can be further described by an arbitrary set of attributes that maintain the actual data. To attach specific semantics to the data in an event log, XES introduces the concept of extension. An extension defines a number of standardized attributes for each level in the hierarchy (log, trace, event), together with their type (e.g., string, boolean) and their specific attribute keys. For example, the XML snippet
```xml
<event>
<string key="concept:name" value="pay_order"/>
<date key="time:timestamp" value="..."/>
<string key="org:resource" value="john"/>
<string key="Order ID" value="123"/>
</event>
```
models an event attesting that John has paid the order identified by code 123. To do so, it exploits three of the pre-defined XES extensions: 1. the concept extension, so as to include the name of the executed activity; 2. the time extension, so as to store the time point at which the event occurred; 3. the organizational extension, so as to qualify the name/role/group of the resource that triggered the event. The main difference between this kind of event and the notion of event used in Section 3.3 is that the XES snippet carries no information about the facts that are deleted and added by the event. This can, however, be seamlessly captured in XES by defining a specific DB manipulation extension that adds two child elements to event: a complex attribute toDel to manage tuples that must be deleted, and a complex attribute toAdd to manage tuples that must be added. Such elements, in turn, contain a set of tuple elements, each denoting a tuple and consisting of a relation name and a list of (named) fields (or, in the case of deletion, simply the primary key of the tuple). For example,
```xml
<event>
<string key="concept:name" value="pay_order"/>
<date key="time:timestamp" value="..."/>
<string key="Order ID" value="123"/>
<tuple relation="CLOSED-ORDERS">
<field key="OID" value="123"/>
<field key="OWNER" value="john"/>
</tuple>
<tuple relation="PENDING-ORDERS" pk="123"/>
</event>
```
extends the aforementioned XES event by stating that the event has the effect of moving order 123 from table PENDING-ORDERS to table CLOSED-ORDERS, also inserting the information about the owner (John).
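A monitor consuming such logs has to turn each event back into add/delete sets. The following sketch parses the extended event with Python's standard `xml.etree` module; the structure follows the snippet above (tuple children directly under event), the timestamp is omitted, and the parsing function is our own illustration rather than part of any XES tooling.

```python
import xml.etree.ElementTree as ET

# Event carrying the DB-manipulation information sketched above: a
# <tuple> with <field> children denotes an insertion, while a <tuple>
# carrying a pk attribute denotes a deletion by primary key.
EVENT_XML = """\
<event>
  <string key="concept:name" value="pay_order"/>
  <string key="Order ID" value="123"/>
  <tuple relation="CLOSED-ORDERS">
    <field key="OID" value="123"/>
    <field key="OWNER" value="john"/>
  </tuple>
  <tuple relation="PENDING-ORDERS" pk="123"/>
</event>
"""

def parse_event(xml_text):
    """Return (to_add, to_del): tuples to insert and keys to delete."""
    root = ET.fromstring(xml_text)
    to_add, to_del = [], []
    for t in root.findall("tuple"):
        relation = t.get("relation")
        if t.get("pk") is not None:
            to_del.append((relation, t.get("pk")))
        else:
            fields = {f.get("key"): f.get("value")
                      for f in t.findall("field")}
            to_add.append((relation, fields))
    return to_add, to_del

to_add, to_del = parse_event(EVENT_XML)
```

Applying the resulting sets to the database moves order 123 from PENDING-ORDERS to CLOSED-ORDERS, exactly the effect described above.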
6.2 Modeling Extended Constraints
We observe that our approach is versatile enough to seamlessly capture dynamic constraints that go beyond those addressed by Declare. First, notice that the distinction between activities and other relations is, in our framework, made only for modeling convenience: both aspects are treated uniformly as relations in the data schema. This makes it directly possible to model dynamic constraints that single out the expected/forbidden evolutions of the data, independently of the actually executed activities. E.g., the constraint $\forall c.\, (\neg \text{Gold}(c) \land \exists c_1, c_2.\, c \neq c_1 \land c \neq c_2 \land c_1 \neq c_2 \land \text{Responsible}(c, c_1) \land \text{Responsible}(c, c_2))$ is a data-driven precedence stating that customers can become gold only if they first become responsible for at least two other customers.
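The non-temporal core of this constraint, the condition under which a customer may become gold, can be checked directly against a database instance. A small sketch under our own relation encoding (facts as tuples whose first component is the relation name; all names hypothetical):

```python
# Check whether customer c is Responsible for at least two distinct other
# customers -- the data-driven condition of the precedence above.
def may_become_gold(instance, c):
    others = {c2 for (rel, c1, c2) in instance
              if rel == "Responsible" and c1 == c and c2 != c}
    return len(others) >= 2

# Illustrative instance: alice is responsible for two other customers,
# dave for only one.
I = {("Responsible", "alice", "bob"),
     ("Responsible", "alice", "carol"),
     ("Responsible", "dave", "bob")}
```

The temporal part of the constraint (that this condition must hold *before* the customer is marked gold) is what the automaton machinery of the previous sections adds on top of such a point-wise check.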
A second important extension concerns the possibility of tackling cross-instance constraints, i.e., dynamic constraints spanning multiple process instances. From the logging point of view, this requires seeing the entire system log as a unique execution trace and, at the same time, attaching an explicit instance identifier to each logged event, so as to enable correlation among events of the same process instance. There are various techniques for tracking this kind of information, such as WS-Addressing in the context of web service interaction. From the language point of view, it is possible to model constraints that are applied on a per-instance basis (by simply correlating on the process instance identifier), or constraints that possibly cross multiple instances. E.g., $\text{response}(\text{Close\_order}(\mathit{pid}, c, o_{ID}), \text{Pay\_order}(\mathit{pid}, c', o_{ID}) \land (c' = c \lor \text{Responsible}(c', c)))$ reconstructs the response constraint modeled in Section 5.3 by correlating on the instance identifier $\mathit{pid}$, whereas $\text{not\_response}(\text{Block}(\mathit{pid}, c, a) \land \text{Admin}(a), \text{Open\_order}(\mathit{pid}', o, c))$ models a cross-instance negation response stating that whenever a customer is blocked by an administrator, she cannot open orders anymore (notice the correlation on the customer variable $c$, but not on the process identifiers, which can possibly differ). A similar approach can be used to model activity-based correlation and, in turn, to support non-atomic activities in the style of [30].
7. CONCLUSIONS
The framework presented in this paper represents the first attempt at exploiting an automata-based approach for the runtime monitoring of process execution traces against dynamic, first-order constraints, which can accommodate a plethora of different templates, including all those of the Declare language, extended with data-related aspects. The framework currently assumes a finite, rigid quantification domain for constraints, which is parsimoniously handled by the monitoring technique. We are currently studying an extension for dealing with varying domains. At the same time, an implementation of our technique is under testing. It has been developed as an extension of the automata-based approach for standard Declare, in the form of an operational support provider inside the well-known ProM process mining framework.
For the time being, the monitoring technique supports all the fine-grained truth values of RV-LTL. The next step is to extend it to provide continuous support (i.e., verification capabilities even after a violation has taken place) and advanced diagnostics, starting from the LTL-based approach in [23] and lifting it to the case of FOLTL.
8. REFERENCES
Tango: Distributed Data Structures over a Shared Log
Mahesh Balakrishnan*, Dahlia Malkhi*, Ted Wobber*, Ming Wu‡, Vijayan Prabhakaran*, Michael Wei§, John D. Davis*, Sriram Rao†, Tao Zou¶, Aviad Zuck∥
*Microsoft Research Silicon Valley ‡Microsoft Research Asia †Microsoft §University of California, San Diego ¶Cornell University ∥Tel-Aviv University
Abstract
Distributed systems are easier to build than ever with the emergence of new, data-centric abstractions for storing and computing over massive datasets. However, similar abstractions do not exist for storing and accessing metadata. To fill this gap, Tango provides developers with the abstraction of a replicated, in-memory data structure (such as a map or a tree) backed by a shared log. Tango objects are easy to build and use, replicating state via simple append and read operations on the shared log instead of complex distributed protocols; in the process, they obtain properties such as linearizability, persistence and high availability from the shared log. Tango also leverages the shared log to enable fast transactions across different objects, allowing applications to partition state across machines and scale to the limits of the underlying log without sacrificing consistency.
1 Introduction
Cloud platforms have democratized the development of scalable applications in recent years by providing simple, data-centric interfaces for partitioned storage (such as Amazon S3 [1] or Azure Blob Store [8]) and parallelizable computation (such as MapReduce [19] and Dryad [28]). Developers can use these abstractions to easily build certain classes of large-scale applications – such as user-facing Internet services or back-end machine learning algorithms – without having to reason about the underlying distributed machinery.
However, current cloud platforms provide applications with little support for storing and accessing metadata. Application metadata typically exists in the form of data structures such as maps, trees, counters, queues, or graphs; real-world examples include filesystem hierarchies [5], resource allocation tables [7], job assignments [3], network topologies [35], deduplication indices [20] and provenance graphs [36]. Updates to metadata usually consist of multi-operation transactions that span different data structures – or arbitrary subsets of a single data structure – while requiring atomicity and isolation; for example, moving a node from a free list to an allocation table, or moving a file from one portion of a namespace to another. At the same time, application metadata is required to be highly available and persistent in the face of faults.
Existing solutions for storing metadata do not provide transactional access to arbitrary data structures with persistence and high availability. Cloud storage services (e.g., SimpleDB [2]) and coordination services (e.g., ZooKeeper [27] and Chubby [14]) provide persistence, high availability, and strong consistency. However, each system does so for a specific data structure, and with limited or no support for transactions that span multiple operations, items, or data structures. Conventional databases support transactions, but with limited scalability and not over arbitrary data structures.
In this paper, we introduce Tango, a system for building highly available metadata services where the key abstraction is a Tango object, a class of in-memory data structures built over a durable, fault-tolerant shared log. As shown in Figure 1, the state of a Tango object exists in two forms: a history, which is an ordered sequence of updates stored durably in the shared log, and any number of views, which are full or partial copies of the data structure in its conventional form – such as a tree or a map – stored in RAM on clients (i.e., application servers). In Tango, the shared log is the object; views constitute soft state and are instantiated, reconstructed, and updated on clients as required by playing the shared history forward. A client modifies a Tango object by appending a new update to the history; it accesses the object by first synchronizing its local view with the history.
Tango objects simplify the construction of metadata services by delegating key pieces of functionality to the underlying shared log. First, the shared log is the source of consistency for the Tango object: clients implement state machine replication by funneling writes through the shared history and synchronizing with it on reads, providing linearizability for single operations. Second, the shared log is the source of durability: clients can recover views after crashes simply by playing back the history stored in the shared log. In addition, views can contain pointers to data stored in the shared log, effectively acting as indices over log-structured storage. Third, the shared log is the source of history: clients can access previous states of the Tango object by instantiating a new view from a prefix of the history. Finally, the shared log enables elasticity: the aggregate throughput of linearizable reads to the Tango object can be scaled simply by adding new views, without slowing down write throughput. In extracting these properties from a shared log, Tango objects can be viewed as a synthesis of state machine replication and log-structured storage.
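As a minimal illustration of this idea (not Tango's actual API, and eliding the distributed implementation of the shared log), consider a map whose only source of truth is an append-only log: views are soft state rebuilt by playback, and a read synchronizes with the log tail before answering, giving linearizable single-operation semantics.

```python
import threading

class SharedLog:
    """Toy in-process stand-in for a durable, totally ordered shared log."""
    def __init__(self):
        self._entries = []
        self._lock = threading.Lock()

    def append(self, entry):
        with self._lock:
            self._entries.append(entry)
            return len(self._entries) - 1   # position in the total order

    def read(self, pos):
        return self._entries[pos]

    def tail(self):
        return len(self._entries)

class LogBackedMap:
    """A replicated map: the log is the object, the dict is a soft-state view."""
    def __init__(self, log):
        self.log = log
        self.view = {}      # rebuildable at any time by replaying the log
        self.applied = 0    # how much of the history this view has played

    def _sync(self):
        # Play the shared history forward up to the current tail.
        tail = self.log.tail()
        while self.applied < tail:
            op, key, value = self.log.read(self.applied)
            if op == "put":
                self.view[key] = value
            self.applied += 1

    def put(self, key, value):
        # State machine replication: writes funnel through the shared history.
        self.log.append(("put", key, value))

    def get(self, key):
        self._sync()            # linearizable read: catch up first
        return self.view.get(key)

log = SharedLog()
writer, reader = LogBackedMap(log), LogBackedMap(log)
writer.put("free_list", ["n1", "n2"])
```

Persistence and crash recovery fall out for free in this picture: a fresh `LogBackedMap` over the same log rebuilds an identical view by playback.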
In addition, Tango provides atomicity and isolation for transactions across different objects by storing them on a single shared log. These objects can be different components of the same application (e.g., a scheduler using a free list and an allocation table), different shards of the application (e.g., multiple subtrees of a filesystem namespace), or even components shared across applications (e.g., different job schedulers accessing the same free list). In all these use cases, the shared log establishes a global order across all updates, efficiently enabling transactions as well as other strongly consistent multi-object operations such as atomic updates, consistent snapshots, coordinated rollback, and consistent remote mirroring. Tango implements these operations by manipulating the shared log in simple ways, obviating the need for the complex distributed protocols typically associated with such functionality. Multiplexing Tango objects on a single shared log also simplifies deployment; new applications can be instantiated just by running new client-side code against the shared log, without requiring application-specific back-end servers.
The Tango design is enabled by the existence of fast, decentralized shared log implementations that can scale to millions of appends and reads per second; our implementation runs over a modified version of CORFU [10], a recently proposed protocol that utilizes a cluster of flash drives for this purpose. However, a key challenge for Tango is the playback bottleneck: even with an infinitely scalable shared log, any single client in the system can only consume the log – i.e., learn the total ordering – at the speed of its local NIC. In other words, a set of clients can extend Tango object histories at aggregate speeds of millions of appends per second, but each client can only read back and apply those updates to its local views at tens of thousands of operations per second.
To tackle the playback bottleneck, Tango implements a stream abstraction over the shared log. A stream provides a readnext interface over the address space of the shared log, allowing clients to selectively learn or consume the subsequence of updates that concern them while skipping over those that do not. Each Tango object is stored on its own stream; to instantiate the view for a Tango object, a client simply plays the associated stream. The result is layered partitioning, where an application can shard its state into multiple Tango objects, each instantiated on a different client, allowing the aggregate throughput of the system to scale until the underlying shared log is saturated. The global ordering imposed by the shared log enables fast cross-partition transactions, ensuring that the scaling benefit of layered partitioning does not come at the cost of consistency.
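The stream interface can be sketched as a tagged, totally ordered log with a per-stream cursor; the names and the linear scan below are our own illustration (the implementation over CORFU is far more efficient). Note how a multi-object transaction occupies a single entry tagged with several streams, so every partition observes it at the same position in the global order.

```python
class StreamLog:
    """Toy shared log where each entry is tagged with the stream ids
    (i.e., the Tango objects) it belongs to."""
    def __init__(self):
        self._entries = []   # list of (frozenset of stream ids, payload)

    def append(self, stream_ids, payload):
        self._entries.append((frozenset(stream_ids), payload))
        return len(self._entries) - 1

    def readnext(self, stream_id, cursor):
        """Return (next_cursor, payload) for the next entry of stream_id
        at or after cursor, or (None, None) if there is none: clients
        skip over entries of streams they do not host."""
        for pos in range(cursor, len(self._entries)):
            ids, payload = self._entries[pos]
            if stream_id in ids:
                return pos + 1, payload
        return None, None

log = StreamLog()
log.append({"freelist"}, ("alloc", "n1"))
log.append({"alloctable"}, ("assign", "n1", "job7"))
# A cross-object transaction: one entry tagged with both streams, so both
# partitions see it in the same global order.
log.append({"freelist", "alloctable"}, ("move", "n2"))

cursor, seen = 0, []
while True:
    next_cursor, payload = log.readnext("freelist", cursor)
    if next_cursor is None:
        break
    seen.append(payload)
    cursor = next_cursor
```

The client playing the `freelist` stream consumes only two of the three entries, which is exactly the selective playback that lets aggregate throughput scale until the underlying log saturates.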
Tango is built in C++ with bindings for Java and C# applications. We’ve built a number of useful data structures with Tango, including ZooKeeper (TangoZK, 1K lines), BookKeeper (TangoBK, 300 lines), and implementations of the Java and C# Collections interfaces such as TreeSets and HashMaps (100 to 300 lines each). Our implementations of these interfaces are persistent, highly available, and elastic, providing linear scaling for linearizable reads against a fixed write load. Additionally, they support fast transactions within and across data structures; for example, applications can transactionally delete a TangoZK node while creating an entry in a TangoMap. Finally, we ran the HDFS namenode over the
Tango variants of ZooKeeper and BookKeeper, showing that our implementations offer full fidelity to the original despite requiring an order of magnitude less code.
We make two contributions in this paper:
- We describe Tango, a system that provides applications with the novel abstraction of an in-memory data structure backed by a shared log. We show that Tango objects can achieve properties such as persistence, strong consistency, and high availability in tens of lines of code via the shared log, without requiring complex distributed protocols (Section 3). In our evaluation, a single Tango object running on 18 clients provides 180K linearizable reads/sec against a 10K/sec write load.
- We show that storing multiple Tango objects on the same shared log enables simple, efficient transactional techniques across objects (Section 4). To implement these techniques efficiently, we present a streaming interface that allows each object to selectively consume a subsequence of the shared log (Section 5). In our evaluation, a set of Tango objects runs at over 100K txes/sec when 16% of transactions span objects on different clients.
2 Background
In practice, metadata services are often implemented as centralized servers; high availability is typically a secondary goal to functionality. When the time comes to ‘harden’ these services, developers are faced with three choices. First, they can roll out their own custom fault-tolerance protocols; this is expensive, time-consuming, and difficult to get right. Second, they can implement state machine replication over a consensus protocol such as Paxos [31]; however, this requires the service to be structured as a state machine with all updates flowing through the Paxos engine, which often requires a drastic rewrite of the code.
The third option is to use an existing highly available data structure such as ZooKeeper, which provides a hierarchical namespace as an external service. However, such an approach forces developers to use a particular data structure (in the case of ZooKeeper, a tree) to store all critical application state, instead of allowing them to choose one or more data structures that best fit their needs (as an analogy, imagine if the C++ STL provided just a hash_map, or Java Collections came with just a TreeSet!). This is particularly problematic for high-performance metadata services which use highly tuned data structures tailored for specific workloads. For example, a membership service that stores server names in ZooKeeper would find it inefficient to implement common functionality such as searching the namespace on some index (e.g., CPU load), extracting the oldest/newest inserted name, or storing multi-MB logs per name.
In practice, developers are forced to cobble together various services, each solving part of the problem; for example, one of the existing, in-progress proposals for adding high availability to the HDFS namenode (i.e., metadata server) uses a combination of ZooKeeper, BookKeeper [30], and its own custom protocols [4]. Such an approach produces fragile systems that depend on multiple other systems, each with its own complex protocols and idiosyncratic failure modes. Often these underlying protocols are repetitive, re-implementing consensus and persistence in slightly different ways. The result is also a deployment nightmare, requiring multiple distributed systems to be independently configured and provisioned.
Can we provide developers with a wide range of data structures that are strongly consistent, persistent, and highly available, while using a single underlying abstraction? Importantly, can we make the development of such a data structure easy enough that developers can write new, application-specific data structures in tens of lines of code? The answer to these questions lies in the shared log abstraction.
2.1 The Shared Log Abstraction
Shared logs were first used in QuickSilver [25, 41] and Camelot [43] in the late 80s to implement fault-tolerance; since then, they have played roles such as packet recovery [26] and remote mirroring [29] in various distributed systems. Two problems have hampered the adoption of the shared log as a mainstream abstraction. First, any shared log implementation is subject to a highly random read workload, since the body of the log can be concurrently accessed by many clients over the network. If the underlying storage media is disk, these randomized reads can slow down other reads as well as reduce the append throughput of the log to a trickle. As Bernstein et al. observe [11], this concern has largely vanished with the advent of flash drives that can support thousands of concurrent read and write IOPS.
The second problem with the shared log abstraction relates to scalability; existing implementations typically require append to the log to be serialized through a primary server, effectively limiting the append throughput of the log to the I/O bandwidth of a single machine. This problem is eliminated by the CORFU protocol [10], which scales the append throughput of the log to the speed at which a centralized sequencer can hand out new offsets in the log to clients.
A CORFU cluster consists of a set of storage nodes organized into replica sets; each storage node exposes a write-once, 64-bit address space, mirrored across the replica set. Additionally, the cluster contains a dedicated sequencer node, which is essentially a networked counter storing the current tail of the shared log.
To append to the shared log, a client first contacts the sequencer and obtains the next free offset in the global address space of the shared log. It then maps this offset to a local offset on one of the replica sets using a simple deterministic mapping over the membership of the cluster. For example, offset 0 might be mapped to $A : 0$ (i.e., page 0 on set $A$, which in turn consists of storage nodes $A_0, A_1,$ and $A_2$), offset 1 to $B : 0$, and so on until the function wraps back to $A : 1$. The client then completes the append by directly issuing writes to the storage nodes in the replica set using a client-driven variant of Chain Replication [45].
Reads to an offset follow a similar process, minus the offset acquisition from the sequencer. Checking the tail of the log comes in two variants: a fast check (sub-millisecond) that contacts the sequencer, and a slow check (10s of milliseconds) that queries the storage nodes for their local tails and inverts the mapping function to obtain the global tail.
The CORFU design has some important properties:
**The sequencer is not a single point of failure.** The sequencer stores a small amount of soft state: a single 64-bit integer representing the tail of the log. When the sequencer goes down, any client can easily recover this state using the slow check operation. In addition, the sequencer is merely an optimization to find the tail of the log and not required for correctness; the Chain Replication variant used to write to the storage nodes guarantees that a single client will ‘win’ if multiple clients attempt to write to the same offset. As a result, the system can tolerate the existence of multiple sequencers, and can run without a sequencer (at much reduced throughput) by having clients probe for the location of the tail. A different failure mode involves clients crashing after obtaining offsets but before writing to the storage nodes, creating ‘holes’ in the log; to handle this case, CORFU provides applications with a fast, sub-millisecond fill primitive as described in [10].
**The sequencer is not a bottleneck for small clusters.** In prior work on CORFU [10], we reported a user-space sequencer that ran at 200K appends/sec. To test the limits of the design, we subsequently built a faster CORFU sequencer using the new Registered I/O interfaces [9] in Windows Server 2012. Figure 2 shows the performance of the new sequencer: as we add clients to the system, sequencer throughput increases until it plateaus at around 570K requests/sec. We obtain this performance without any batching (beyond TCP/IP’s default Nagling); with a batch size of 4, for example, the sequencer can run at over 2M requests/sec, but this will obviously affect the end-to-end latency of appends to the shared log. Our finding that a centralized server can be made to run at very high RPC rates matches recent observations by others; the Percolator system [38], for example, runs a centralized timestamp oracle with similar functionality at over 2M requests/sec with batching; Vasudevan et al. [46] report achieving 1.6M sub-millisecond 4-byte reads/sec on a single server with batching; Masstree [33] is a key-value server that provides 6M queries/sec with batching.
**Garbage collection is a red herring.** System designers tend to view log-structured designs with suspicion, conditioned by decades of experience with garbage collection over hard drives. However, flash storage has sparked a recent resurgence in log-structured designs, due to the ability of the medium to provide contention-free random reads (and its need for sequential writes); every SSD on the market today traces its lineage to the original LFS design [39], implementing a log-structured storage system that can provide thousands of IOPS despite concurrent GC activity. In this context, a single CORFU storage node is an SSD with a custom interface (i.e., a write-once, 64-bit address space instead of a conventional LBA, where space is freed by explicit trims rather than overwrites). Accordingly, its performance and endurance levels are similar to any commodity SSD. As with any commodity SSD, the flash lifetime of a CORFU node depends on the workload; sequential trims result in substantially less wear on the flash than random trims.
Finally, while the CORFU prototype we use works over commodity SSDs, the abstract design can work over any form of non-volatile RAM (including battery-backed DRAM and Phase Change Memory). The size of a single entry in the log (which has to be constant across entries) can be selected at deployment time to suit the underlying medium (e.g., 128 bytes for DRAM, or 4KB for NAND flash).
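The sequencer and the deterministic offset mapping described above can be sketched in a few lines. This is a toy, single-process illustration rather than CORFU code: `Sequencer` models the networked counter as a plain integer, and `mapOffset` is one plausible round-robin mapping consistent with the example in the text (offset 0 → set A, local 0; offset 1 → set B, local 0; wrapping back to A, local 1).

```cpp
#include <cassert>
#include <cstdint>
#include <utility>

// Toy sequencer: a networked counter in CORFU, a plain counter here.
struct Sequencer {
    uint64_t tail = 0;
    uint64_t next() { return tail++; }  // hand out the next free global offset
};

// Deterministic mapping over cluster membership: with numSets replica sets,
// global offset 0 -> (set 0, local 0), 1 -> (set 1, local 0), ...,
// wrapping back to (set 0, local 1), and so on.
std::pair<uint64_t, uint64_t> mapOffset(uint64_t globalOffset, uint64_t numSets) {
    return { globalOffset % numSets,    // which replica set
             globalOffset / numSets };  // local offset within that set
}
```

A client would then complete the append by chain-replicating the write across the storage nodes of the chosen replica set; reads invert the same mapping.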
### 3 The Tango Architecture
A Tango application is typically a service running in a cloud environment as a part of a larger distributed system, managing metadata such as indices, namespaces, membership, locks, or resource lists. Application code executes on clients (which are compute nodes or application servers in a data center) and manipulates data stored in Tango objects, typically in response to networked requests from machines belonging to other services and subsystems. The local view of the object on each client interacts with a Tango runtime, which in turn provides persistence and consistency by issuing appends and reads to an underlying shared log. Importantly, Tango runtimes on different machines do not communicate with each other directly through message-passing; all interaction occurs via the shared log. Applications can use a standard set of objects provided by Tango, providing interfaces similar to the Java Collections Library or the C++ STL; alternatively, application developers can roll their own Tango objects.
#### 3.1 Anatomy of a Tango Object
As described earlier, a Tango object is a replicated in-memory data structure backed by a shared log. The Tango runtime simplifies the construction of such an object by providing the following API:
- **update_helper**: this accepts an opaque buffer from the object and appends it to the shared log.
- **query_helper**: this reads new entries from the shared log and provides them to the object via an apply upcall.
The code for the Tango object itself has three main components. First, it contains the view, which is an in-memory representation of the object in some form, such as a list or a map; in the example of a TangoRegister shown in Figure 3, this state is a single integer. Second, it implements the mandatory apply upcall which changes the view when the Tango runtime calls it with new entries from the log. The view must be modified only by the Tango runtime via this apply upcall, and not by application threads executing arbitrary methods of the object.
Finally, each object exposes an external interface consisting of object-specific mutator and accessor methods; for example, a TangoMap might expose get/put methods, while the TangoRegister in Figure 3 exposes read/write methods. The object’s mutators do not directly change the in-memory state of the object, nor do the accessors immediately read its state. Instead, each mutator coalesces its parameters into an opaque buffer – an update record – and calls the update_helper function of the Tango runtime, which appends it to the shared log. Each accessor first calls query_helper before returning an arbitrary function over the state of the object; within the Tango runtime, query_helper plays new update records in the shared log until its current tail and applies them to the object via the apply upcall before returning.
We now explain how this simple design extracts important properties from the underlying shared log.
```cpp
class TangoRegister {
    int oid;
    TangoRuntime *T;
    int state;
    void apply(void *X) {
        state = *(int *)X;
    }
    void writeRegister(int newstate) {
        T->update_helper(&newstate, sizeof(int), oid);
    }
    int readRegister() {
        T->query_helper(oid);
        return state;
    }
};
```
Figure 3: TangoRegister: a linearizable, highly available and persistent register. Tango functions/upcalls in bold.
**Consistency**: Based on our description thus far, a Tango object is indistinguishable from a conventional SMR (state machine replication) object. As in SMR, different views of the object derive consistency by funneling all updates through a total ordering engine (in our case, the shared log). As in SMR, strongly consistent accessors are implemented by first placing a marker at the current position in the total order and then ensuring that the view has seen all updates until that marker. In conventional SMR this is usually done by injecting a read operation into the total order, or by directing the read request through the leader [13]; in our case we leverage the check function of the log. Accordingly, a Tango object with multiple views on different machines provides linearizable semantics for invocations of its mutators and accessors.
**Durability:** A Tango object is trivially persistent; the state of the object can be reconstructed by simply creating a new instance and calling query_helper on Tango. A more subtle point is that the in-memory data structure of the object can contain pointers to values stored in the shared log, effectively turning the data structure into an index over log-structured storage. To facilitate this, each Tango object is given direct, read-only access to its underlying shared log, and the apply upcall optionally provides the offset in the log of the new update. For example, a TangoMap can update its internal hashmap with the offset rather than the value on each apply upcall; on a subsequent get, it can consult the hash-map to locate the offset and then directly issue a random read to the shared log.
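The index-over-log idea can be illustrated with a hypothetical sketch: the map view below stores log offsets rather than values, and resolves a get by reading the log directly at the recorded offset. `Log`, `OffsetMap`, `apply`, and `get` are illustrative names, not the actual TangoMap API; a plain vector stands in for the shared log.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Stand-in for the shared log: append-only, randomly readable.
using Log = std::vector<std::string>;

// A map view that stores log offsets, turning the in-memory structure
// into an index over log-structured storage.
struct OffsetMap {
    std::map<std::string, uint64_t> index;  // key -> offset of latest value

    // apply() upcall: record where the value lives, not the value itself.
    void apply(const std::string& key, uint64_t offset) { index[key] = offset; }

    // get(): consult the index, then issue a random read to the log.
    std::string get(const Log& log, const std::string& key) const {
        return log.at(index.at(key));
    }
};
```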
**History:** Since all updates are stored in the shared log, the state of the object can be rolled back to any point in its history simply by creating a new instance and syncing with the appropriate prefix of the log. To enable this, the Tango query_helper interface takes an optional parameter that specifies the offset at which to stop syncing. To optimize this process in cases where the view is small (e.g., a single integer in TangoRegister), the Tango object can create checkpoints and provide them to Tango via a checkpoint call. Internally, Tango stores these checkpoints on the shared log and accesses them when required on query_helper calls. Additionally, the object can forgo the ability to roll back (or index into the log) before a checkpoint with a forget call, which allows Tango to trim the log and reclaim storage capacity.
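Rollback can be illustrated with a toy register: replaying a prefix of the log into a fresh view reconstructs the object's state at that point in its history. `syncPrefix` below is a hypothetical stand-in for query_helper with its optional stop-at-offset parameter, not Tango's actual interface.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// A trivial register view rebuilt by replaying a prefix of the log.
struct RegisterView {
    int state = 0;
    void apply(int update) { state = update; }
};

// Replay log entries [0, stopOffset) into a fresh view, yielding the
// object's state as of that offset.
RegisterView syncPrefix(const std::vector<int>& log, uint64_t stopOffset) {
    RegisterView v;
    for (uint64_t i = 0; i < stopOffset && i < log.size(); ++i)
        v.apply(log[i]);
    return v;
}
```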
The Tango design enables other useful properties. Strongly consistent reads can be scaled simply by instantiating more views of the object on new clients. More reads translate into more check and read operations on the shared log, and scale linearly until the log is saturated. Additionally, objects with different in-memory data structures can share the same data on the log. For example, a namespace can be represented by different trees, one ordered on the filename and the other on a directory hierarchy, allowing applications to perform two types of queries efficiently (i.e., “list all files starting with the letter B” vs. “list all files in this directory”).
### 3.2 Multiple Objects in Tango
We now substantiate our earlier claim that storing multiple objects on a single shared log enables strongly consistent operations across them without requiring complex distributed protocols. The Tango runtime on each client can multiplex the log across objects by storing and checking a unique object ID (OID) on each entry; such a scheme has the drawback that every client has to play every entry in the shared log. For now, we make the assumption that each client hosts views for all objects in the system. Later in the paper, we describe layered partitioning, which enables strongly consistent operations across objects without requiring each object to be hosted by each client, and without requiring each client to consume the entire shared log.
Many strongly consistent operations that are difficult to achieve in conventional distributed systems are trivial over a shared log. Applications can perform coordinated rollbacks or take consistent snapshots across many objects simply by creating views of each object synced up to the same offset in the shared log. This can be a key capability if a system has to be restored to an earlier state after a cascading corruption event. Another trivially achieved capability is remote mirroring; application state can be asynchronously mirrored to remote data centers by having a process at the remote site play the log and copy its contents. Since log order is maintained, the mirror is guaranteed to represent a consistent, system-wide snapshot of the primary at some point in the past. In Tango, all these operations are implemented via simple appends and reads on the shared log.
Tango goes one step further and leverages the shared log to provide transactions within and across objects. It implements optimistic concurrency control by appending speculative transaction commit records to the shared log. Commit records ensure atomicity, since they determine a point in the persistent total ordering at which the changes that occur in a transaction can be made visible at all clients. To provide isolation, each commit record contains a read set: a list of objects read by the transaction along with their versions, where the version is simply the last offset in the shared log that modified the object. A transaction only succeeds if none of its reads are stale when the commit record is encountered (i.e., the objects have not changed since they were read). As a result, Tango provides the same isolation guarantee as 2-phase locking, which is at least as strong as strict serializability [12], and is identical to the guarantee provided by the recent Spanner [18] system.
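The commit/abort rule admits a compact sketch. The code below is a simplified reading of the scheme, not Tango's implementation: a commit record carries (object, version) pairs, where a version is the last log offset that modified the object, and every client reaches the same decision by checking whether any read object has since acquired a newer version.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <vector>

using ObjectId = int;

// One entry of a commit record's read set.
struct ReadEntry { ObjectId oid; uint64_t version; };

// Deterministic decision every client reaches independently:
// commit iff none of the read objects changed after they were read.
bool commits(const std::vector<ReadEntry>& readSet,
             const std::map<ObjectId, uint64_t>& currentVersions) {
    for (const auto& r : readSet) {
        auto it = currentVersions.find(r.oid);
        if (it != currentVersions.end() && it->second > r.version)
            return false;  // stale read: object modified in the conflict window
    }
    return true;
}
```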
Figure 4 shows an example of the transactional interface provided by Tango to application developers, where calls to object accessors and mutators can be bracketed by BeginTX and EndTX calls. BeginTX creates a transaction context in thread-local storage. EndTX appends a commit record to the shared log, plays the log forward until the commit point, and then makes a commit/abort decision. Each client that encounters the commit record decides – independently but deterministically – whether it should commit or abort by comparing the versions in the read set with the current versions of the objects. If none of the read objects have changed since they were read, the transaction commits and the objects in the write set are updated with the apply upcall. In the example in Figure 4, this happens if the committed transactions in the conflict window do not modify the ‘owners’ data structure.
To support transactional access, Tango object code requires absolutely no modification (for example, the TangoRegister code in Figure 3 supports transactions); instead, the Tango runtime merely substitutes different implementations for the update_helper and query_helper functions if an operation is running within a transactional context. The update_helper call now buffers updates instead of writing them immediately to the shared log; when a log entry’s worth of updates have been accumulated, it flushes them to the log as speculative writes that are not immediately made visible to other clients playing the log. Instead, the updates take effect only if the transaction commits, i.e., if the read set has not changed in the conflict window.
Figure 4: Example of a single-writer list built with transactions over a TangoMap and a TangoList.
Versioning: While using a single version number per object works well for fine-grained objects such as registers or counters, it can result in an unnecessarily high abort rate for large data structures such as maps, trees, or tables, where transactions should ideally be allowed to concurrently modify unrelated parts of the data structure. Accordingly, objects can optionally pass in opaque key parameters to the update_helper and query_helper calls, specifying which disjoint sub-region of the data structure is being accessed and thus allowing for fine-grained versioning within the object. Internally, Tango tracks the latest version of each key within an object. For data structures that are not statically divisible into sub-regions (such as queues or trees), the object can use its own key scheme and provide upcalls to the Tango runtime that are invoked to check and update versions.
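A minimal sketch of fine-grained versioning, under the assumption that versions are tracked in a per-(object, key) table; `VersionTable` and its methods are illustrative names rather than Tango's internal interface.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>
#include <utility>

// Fine-grained versioning: track the latest version per (object, key)
// instead of one version per object, so transactions touching disjoint
// sub-regions of a large data structure do not conflict.
struct VersionTable {
    std::map<std::pair<int, std::string>, uint64_t> latest;

    void update(int oid, const std::string& key, uint64_t offset) {
        latest[{oid, key}] = offset;
    }

    // A read of (oid, key) at version v is stale iff a later write touched
    // that same key; writes to other keys of the object are ignored.
    bool stale(int oid, const std::string& key, uint64_t v) const {
        auto it = latest.find({oid, key});
        return it != latest.end() && it->second > v;
    }
};
```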
Read-only transactions: For these, the EndTX call does not insert a commit record into the shared log; instead, it just plays the log forward until its current tail before making the commit/abort decision. If there’s no write activity in the system (and consequently no new updates to play forward), a read-only transaction only requires checking the tail of the shared log; in CORFU, this is a single round-trip to the sequencer. Tango also supports fast read-only transactions from stale snapshots by having EndTX make the commit/abort decision locally, without interacting with the log. Write-only transactions require an append on the shared log but can commit immediately without playing the log forward.
Failure Handling: A notable aspect of Tango objects (as described thus far) is the simplicity of failure handling, a direct consequence of using a fault-tolerant shared log. A Tango client that crashes in the middle of a transaction can leave behind orphaned data in the log without a corresponding commit record; other clients can complete the transaction by inserting a dummy commit record designed to abort. In the context of CORFU, a crashed Tango client can also leave holes in the shared log: any client that encounters these uses the CORFU fill operation on them after a tunable time-out (100ms by default). Beyond these, the crash of a Tango client has no unpleasant side-effects.
4 Layered Partitions
In the previous section, we showed how a shared log enables strongly consistent operations such as transactions across multiple Tango objects. In our description, we assumed that each client in the system played the
Figure 5: Use cases for Tango objects: application state can be distributed in different ways across objects.
entire shared log, with the Tango runtime multiplexing the updates in the log across different Tango objects. Such a design is adequate if every client in the system hosts a view of every object in the system, which is the case when the application is a large, fully replicated service (as in example (a) in Figure 5). For example, a job scheduling service that runs on multiple application servers for high availability can be constructed using a TangoMap (mapping jobs to compute nodes), a TangoList (storing free compute nodes) and a TangoCounter (for new job IDs). In this case, each of the application servers (i.e. Tango clients) runs a full replica of the job scheduler service and hence needs to access all three objects. Requiring each node to play back the entire log is also adequate if different objects share the same history, as described earlier; in example (b) in Figure 5, a server hosts a tree-based map and a hash-based map over the same data to optimize for specific access patterns.
However, for other use cases, different clients in the system may want to host views of different subsets of objects, due to different services or components sharing common state (see example (c) in Figure 5). For instance, let’s say that the job scheduler above coexists with a backup service that periodically takes nodes in the free list offline, backs them up, and then returns them to the free list. This backup service runs on a different set of application servers and is composed from a different set of Tango objects, but requires access to the same TangoList as the job scheduler. In this scenario, forcing each application server to play the entire shared log is wasteful; the backup service does not care about the state of the objects that compose the job scheduler (other than the free list), and vice versa. Additionally, different clients may want to host views of disjoint subsets of objects (as in example (d) in Figure 5), scaling the system for operations within a partition while still using the underlying shared log for consistency across partitions.
We call this **layered partitioning**: each client hosts a (possibly overlapping) partition of the global state of the system, but this partitioning scheme is layered over a single shared log. To efficiently implement layered partitions without requiring each client to play the entire shared log, Tango maps each object to a stream over the shared log. A stream augments the conventional shared log interface (**append** and random **read**) with a streaming **readnext** call. Many streams can co-exist on a single shared log; calling **readnext** on a stream returns the next entry belonging to that stream in the shared log, skipping over entries belonging to other streams. With this interface, clients can selectively consume the shared log by playing the streams of interest to them (i.e., the streams of objects hosted by them). Importantly, streams are not necessarily disjoint; a **multiappend** call allows a physical entry in the log to belong to multiple streams, a capability we use to implement transactions across objects.
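The streaming interface can be modeled in a few lines. The sketch below is a toy in-memory analogue, not the CORFU streaming layer: each entry is tagged with the set of streams it belongs to, `multiappend` writes one physical entry into several streams, and `readnext` advances a per-stream cursor while skipping entries belonging to other streams.

```cpp
#include <cassert>
#include <cstdint>
#include <set>
#include <string>
#include <utility>
#include <vector>

// One physical log entry may belong to several streams (multiappend).
struct Entry {
    std::set<int> streams;
    std::string payload;
};

struct StreamedLog {
    std::vector<Entry> log;

    // multiappend: one entry, one position in the global order, many streams.
    uint64_t multiappend(std::set<int> streams, std::string payload) {
        log.push_back({std::move(streams), std::move(payload)});
        return log.size() - 1;
    }

    // readnext: global offset of the next entry in 'stream' at or after
    // 'cursor'; advances the cursor past it. Returns -1 at end of stream.
    int64_t readnext(int stream, uint64_t& cursor) const {
        for (; cursor < log.size(); ++cursor)
            if (log[cursor].streams.count(stream))
                return static_cast<int64_t>(cursor++);
        return -1;
    }
};
```

A commit record appended with `multiappend({1, 2}, ...)` occupies a single log position yet is seen by clients playing either stream, which is exactly the property the transactional machinery in Section 4.1 relies on.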
Accordingly, each client now plays the streams belonging to the objects in its layered partition. How does this compare with conventionally partitioned or sharded systems? As in sharding, each client hosting a layered partition only sees a fraction of the traffic in the system, allowing throughput to scale linearly with the number of partitions (assuming these don’t overlap). Unlike sharding, applications now have the ability to efficiently perform strongly consistent operations such as transactions across layered partitions, since the shared log imposes a global ordering across partitions. In exchange for this new capability, there’s now a cap on aggregate throughput across all partitions; once the shared log is saturated, adding more layered partitions does not increase throughput.
In the remainder of this section, we describe how Tango uses streams over a shared log to enable fast transactions without requiring all clients to play the entire log. In the next section, we describe our implementation of streams within the CORFU shared log.
### 4.1 Transactions over Streams
Tango uses streams in an obvious way: each Tango object is assigned its own dedicated stream. If transactions never cross object boundaries, no further changes are required to the Tango runtime. When transactions cross object boundaries, Tango changes the behavior of its **EndTX** call to **multiappend** the commit record to all the streams involved in the write set. This scheme ensures two important properties required for atomicity and isolation. First, a transaction that affects multiple objects occupies a single position in the global ordering; in other words, there is only one commit record per transaction in the raw shared log. Second, a client hosting an object sees every transaction that impacts the object, even if it hosts no other objects.
When a commit record is appended to multiple streams, each Tango runtime can encounter it multiple times, once in each stream it plays (under the hood, the streaming layer fetches the entry once from the shared log and caches it). The first time it encounters the record at a position X, it plays all the streams involved until position X, ensuring that it has a consistent snapshot of all the objects touched by the transaction as of X. It then checks for read conflicts (as in the single-object case) and determines the commit/abort decision.
When each client does not host a view for every object in the system, writes or reads can involve objects that are not locally hosted at the client that generates the commit record or the client that encounters it. We examine each of these cases:
A. Remote writes at the generating client: The generating client – i.e., the client that executed the transaction and created the commit record – may want to write to a remote object (i.e., an object for which it does not host a local view). This case is easy to handle: as we describe later, a client does not need to play a stream to append to it, and hence the generating client can simply append the commit record to the stream of the remote object.
B. Remote writes at the consuming client: A client may encounter commit records generated by other clients that involve writes to objects it does not host; in this case, it simply updates its local objects while ignoring updates to the remote objects.
Remote-write transactions are an important capability. Applications that partition their state across multiple objects can now consistently move items from one partition to the other. For example, in our implementation of ZooKeeper as a Tango object, we can partition the namespace by running multiple instances of the object, and move keys from one namespace to the other using remote-write transactions. Another example is a producer-consumer queue; with remote-write transactions, the producer can add new items to the queue without having to locally host it and see all its updates.
**C. Remote reads at the consuming client:** Here, a client encounters commit records generated by other clients that involve reads of objects it does not host; in this case, it does not have the information required to make a commit/abort decision, since it has no local copy of the object to check the read version against. To resolve this problem, we add an extra round to the conflict resolution process, as shown in Figure 6. The client that generates and appends the commit record (App1 in the figure) immediately plays the log forward until the commit point, makes a commit/abort decision for the record it just appended, and then appends an extra decision record to the same set of streams. Other clients that encounter the commit record (App2 in the figure) but do not have locally hosted copies of the objects in its read set can now wait for this decision record to arrive. Significantly, the extra phase adds latency to the transaction but does not increase the abort rate, since the conflict window for the transaction is still the span in the shared log between the reads and the commit record.
Concretely, a client executing a transaction must insert a decision record for a transaction if there is some other client in the system that hosts an object in its write set but not all the objects in its read set. In our current implementation, we require developers to mark objects as requiring decision records; for example, in Figure 6, App1 marks object A but not object C. This solution is simple but conservative and static; a more dynamic scheme might involve tracking the set of objects hosted by each client.
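The marking rule above can be restated as a predicate over the objects each client hosts. The sketch below is purely illustrative; the function name and the `hosted_by_client` map are hypothetical, not part of the Tango API:

```python
def needs_decision_record(write_set, read_set, hosted_by_client):
    """A transaction needs a decision record if some client hosts an
    object in its write set but not all objects in its read set: that
    client sees the commit record yet cannot resolve it on its own."""
    write_set, read_set = set(write_set), set(read_set)
    for hosted in hosted_by_client.values():
        if hosted & write_set and not read_set <= hosted:
            return True
    return False
```

A client hosting only a written object (but none of the read set) forces the generating client to append a decision record; if every interested client also hosts the full read set, no extra record is needed.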
**D. Remote reads at the generating client:** Tango does not currently allow a client to execute transactions and generate commit records involving remote reads. Calling an accessor on an object that does not have a local view is problematic, since the data does not exist locally; possible solutions involve invoking an RPC to a different client with a view of the object, if one exists, or recreating the view locally at the beginning of the transaction, which can be too expensive. If we do issue RPCs to other clients, conflict resolution becomes problematic; the node that generated the commit record does not have local views of the objects read by it and hence cannot check their latest versions to find conflicts. As a result, conflict resolution requires a more complex, collaborative protocol involving multiple clients sharing partial, local commit/abort decisions via the shared log; we plan to explore this as future work.
A second limitation is that a single transaction can only write to a fixed number of Tango objects. The multiappend call places a limit on the number of streams to which a single entry can be appended. As we will see in the next section, this limit is set at deployment time and translates to storage overhead within each log entry, with each extra stream requiring 12 bytes of space in a 4KB log entry.
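Given the 12-byte headers and 4KB entries quoted here, the deployment-time trade-off between the multiappend limit and payload space is simple arithmetic. This sketch is ours, not Tango code:

```python
ENTRY_SIZE = 4096   # 4KB CORFU log entry
HEADER_SIZE = 12    # per-stream header (K = 4, relative format)

def payload_capacity(max_streams):
    """Bytes left for commit-record payload after reserving one stream
    header per object a single transaction may write to."""
    return ENTRY_SIZE - max_streams * HEADER_SIZE
```

For instance, allowing transactions to span 16 objects reserves 192 bytes of each entry for headers.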
**Failure Handling:** The decision record mechanism described above adds a new failure mode to Tango: a client can crash after appending a commit record but before appending the corresponding decision record. A key point to note, however, is that the extra decision phase is merely an optimization; the shared log already contains all the information required to make the commit/abort decision. Any other client that hosts the read set of the transaction can insert a decision record after a time-out if it encounters an orphaned commit record. If no such client exists and a larger time-out expires, any client in the system can reconstruct local views of each object in the read set synced up to the commit offset and then check for conflicts.
### 5 Streaming CORFU
In this section, we describe our addition of a streaming interface to the CORFU shared log implementation.
As we described in Section 2, CORFU consists of three components: a client-side library that exposes an append/read/check/trim interface to clients; storage servers that each expose a 64-bit, write-once address space over flash storage; and a sequencer that hands out 64-bit counter values.
To implement streaming, we changed the client-side library to allow the creation and playback of streams. Internally, the library stores stream metadata as a linked list of offsets on the address space of the shared log, along with an iterator. When the application calls readnext on a stream, the library issues a conventional CORFU random read to the offset pointed to by the iterator, and moves the iterator forward.
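The per-stream metadata can be modeled as an offset list plus an iterator; `random_read` stands in for a conventional CORFU random read, and the class is a sketch of the library's internal state, not its actual interface:

```python
class StreamIterator:
    """Client-side view of one stream: a linked list of log offsets
    plus a cursor, as kept by the modified client-side library."""
    def __init__(self, offsets, random_read):
        self.offsets = offsets          # offsets of this stream's entries
        self.pos = 0                    # iterator into that list
        self.random_read = random_read  # conventional CORFU random read

    def readnext(self):
        """Read the entry at the cursor and advance the cursor."""
        if self.pos >= len(self.offsets):
            return None                 # nothing known past this point
        entry = self.random_read(self.offsets[self.pos])
        self.pos += 1
        return entry
```

A stream whose entries sit at offsets 3 and 7 of the shared log would yield those entries in order and then `None` until the list is extended.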
To enable the client-side library to efficiently construct this linked list, each entry in the shared log now has a small stream header. This header includes a stream ID as well as backpointers to the last \(K\) entries in the shared log belonging to the same stream. When the client-side library starts up, the application provides it with the list of stream IDs of interest to it. For each such stream, the library finds the last entry in the shared log belonging to that stream (we’ll shortly describe how it does so efficiently). The \(K\) backpointers in this entry allow it to construct a \(K\)-sized suffix of the linked list of offsets comprising the stream. It then issues a read to the offset pointed at by the \(K\)th backpointer and returns the previous \(K\) offsets in the linked list. In this manner, the library can construct the linked list by striding backward on the log, issuing \(\frac{N}{K}\) reads to build the list for a stream with \(N\) entries. A higher redundancy factor \(K\) for the backpointers translates into a longer stride length and allows for faster construction of the linked list.
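The backward stride can be sketched as follows. Each entry's header is modeled as up to \(K\) backpointers (newest first, absolute offsets for simplicity); jumping to the \(K\)th backpointer on every read is what yields the \(\frac{N}{K}\) read count. The function name and header accessor are hypothetical:

```python
def build_stream_list(tail_offset, read_backptrs, K=4):
    """Reconstruct the offsets of a stream by striding backward on the
    log: each header read yields up to K earlier offsets, so a stream
    of N entries costs roughly N/K reads to reconstruct."""
    offsets = [tail_offset]
    cur = tail_offset
    while True:
        backptrs = read_backptrs(cur)    # up to K offsets, newest first
        if not backptrs:
            break                        # reached the head of the stream
        offsets.extend(backptrs)
        cur = backptrs[-1]               # stride to the Kth (oldest) one
    offsets.reverse()
    return offsets
```

With \(K = 2\), a six-entry stream is rebuilt in three header reads plus one terminal read at the head.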
By default, the stream header stores the \(K\) backpointers using 2-byte deltas relative to the current offset, which overflow if the distance to the previous entry in the stream is larger than 64K entries. To handle the case where all \(K\) deltas overflow, the header uses an alternative format where it stores \(\frac{K}{4}\) backpointers as 64-bit absolute offsets which can index any location in the shared log’s address space. Each header now has an extra bit that indicates the backpointer format used (relative or absolute), and a list of either \(K\) 2-byte relative backpointers or \(\frac{K}{4}\) 8-byte absolute backpointers. In practice, we use a 31-bit stream ID and use the remaining bit to store the format indicator. If \(K = 4\), which is the minimum required for this scheme, the header uses 12 bytes. To allow each entry to belong to multiple streams, we store a fixed number of such headers on the entry. The number of headers we store is equal to the number of streams the entry can belong to, which in turn translates to the number of objects that a single transaction can write to.
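The two header formats can be sketched with Python's `struct`; the sizes match the text (12 bytes when \(K = 4\)), but the exact on-disk field order is our assumption:

```python
import struct

def pack_header(stream_id, backptrs, relative):
    """31-bit stream ID plus a 1-bit format flag, followed by either
    K = 4 two-byte relative deltas or K/4 = 1 eight-byte absolute
    offset. Field order is illustrative, sizes follow the text."""
    assert stream_id < 1 << 31
    word = (stream_id << 1) | (0 if relative else 1)
    fmt = "<I4H" if relative else "<IQ"
    return struct.pack(fmt, word, *backptrs)

def unpack_header(data):
    """Inverse of pack_header; returns (stream_id, backptrs, relative)."""
    word = struct.unpack_from("<I", data)[0]
    if word & 1 == 0:                                # relative format
        return word >> 1, list(struct.unpack_from("<4H", data, 4)), True
    return word >> 1, [struct.unpack_from("<Q", data, 4)[0]], False
```

Both variants occupy 12 bytes, so switching formats does not change the per-entry overhead.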
Appending to a set of streams requires the client to acquire a new offset by calling increment on the sequencer (as in conventional CORFU). However, the sequencer now accepts a set of stream IDs in the client’s request, and maintains the last \(K\) offsets it has issued for each stream ID. Using this information, the sequencer returns a set of stream headers in response to the increment request, along with the new offset. Having obtained the new offset, the client-side library prepends the stream headers to the application payload and writes the entry using the conventional CORFU protocol to update the storage nodes. The sequencer also supports an interface to return this information without incrementing the counter, allowing clients to efficiently find the last entry for a stream on startup or otherwise.

**Figure 7:** Streams are stored on the shared log with redundant backpointers.
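The sequencer's per-stream bookkeeping and the extended increment call can be modeled as below; this is a behavioral sketch of the description above, not the real RPC interface:

```python
from collections import defaultdict, deque

class Sequencer:
    """Hands out monotonically increasing offsets and remembers, per
    stream, the last K offsets it issued, so that it can return
    backpointers for the stream headers of a new entry."""
    def __init__(self, K=4):
        self.counter = 0
        self.tails = defaultdict(lambda: deque(maxlen=K))

    def increment(self, stream_ids):
        """Issue a new offset plus, per stream, the backpointers the
        client should embed in the entry's stream headers."""
        offset = self.counter
        self.counter += 1
        headers = {sid: list(self.tails[sid]) for sid in stream_ids}
        for sid in stream_ids:
            self.tails[sid].append(offset)
        return offset, headers

    def tail(self, stream_id):
        """Query the last entry of a stream without incrementing."""
        t = self.tails[stream_id]
        return t[-1] if t else None
```

The `tail` query models the non-incrementing interface used by clients to locate a stream's last entry on startup.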
Updating the metadata for a stream at the client-side library (i.e., the linked list of offsets) is similar to populating it on startup; the library contacts the sequencer to find the last entry in the shared log belonging to the stream and backtracks until it finds entries it already knows about. The operation of bringing the linked list for a stream up-to-date can be triggered at various points. It can happen reactively when readnext is called; but this can result in very high latencies for the readnext operation if the application issues reads burstily and infrequently. It can happen proactively on appends, but this is wasteful for applications that append to the stream but never read from it, since the linked list is never consulted in this case and does not have to be kept up-to-date. To avoid second-guessing the application, we add an additional sync call to the modified library which brings the linked list up-to-date and returns the last offset in the list to the application. The application is required to call this sync function before issuing readnext calls to ensure linearizable semantics for the stream, but can also make periodic, proactive sync calls to amortize the cost of keeping the linked list updated.
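The sync operation can be sketched as backtracking from the sequencer-reported tail until an already-known offset is reached; here we follow the immediate-predecessor backpointer so no entry is skipped. Names are illustrative:

```python
def sync(known_offsets, tail, read_backptrs):
    """Bring a stream's offset list (oldest first) up to date: walk
    backward from the current tail until we hit an offset we already
    know about, then splice the newly found entries onto the list."""
    known = set(known_offsets)
    new = []
    cur = tail
    while cur is not None and cur not in known:
        new.append(cur)
        ptrs = read_backptrs(cur)   # newest-first backpointers
        cur = ptrs[0] if ptrs else None
    known_offsets.extend(reversed(new))
    return known_offsets[-1] if known_offsets else None
```

Calling `sync` when the list is already current touches no entries, which is why periodic proactive calls amortize cheaply.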
**Failure Handling:** Our modification of CORFU has a fault-tolerance implication; unlike the original protocol, we can no longer tolerate the existence of multiple sequencers, since this can result in clients obtaining and storing different, conflicting sets of backpointers for the same stream. To ensure that this case cannot occur, we modified reconfiguration in CORFU to include the sequencer as a first-class member of the ‘projection’ or membership view. When the sequencer fails, the system is reconfigured to a new view with a different sequencer, using the same protocol used by CORFU to eject failed storage nodes. Any client attempting to write to a storage node after obtaining an offset from the old sequencer will receive an error message, forcing it to update its view and switch to the new sequencer. In an 18-node deployment, we are able to replace a failed sequencer within 10 ms. Once a new sequencer comes up, it has to reconstruct its backpointer state; in the current implementation, this is done by scanning backward on the shared log, but we plan on expediting this by having the sequencer store periodic checkpoints in the log. The total state at the sequencer is quite manageable; with \( K = 4 \) backpointers per stream, the space required is \( 4 \times 8 \) bytes per stream, or 32MB for 1M streams.
In addition, crashed clients can create holes in the log if they obtain an offset from the sequencer and then fail before writing to the storage units. In conventional CORFU, any client can use the fill call to patch a hole with a junk value. Junk values are problematic for streaming CORFU since they do not contain backpointers. When the client-side library strides through the backpointers to populate or update its metadata for a stream, it has to stop if all \( K \) relative backpointers from a particular offset lead to junk entries (or all \( \frac{K}{4} \) backpointers in the absolute backpointer format). In our current implementation, a client in this situation resorts to scanning the log backwards to find an earlier valid entry for the stream.
### 6 Evaluation
Our experimental testbed consists of 36 8-core machines in two racks, with gigabit NICs on each node and 20 Gbps between the top-of-rack switches. Half the nodes (evenly divided across racks) are equipped with two Intel X25V SSDs each. In all the experiments, we run an 18-node CORFU deployment on these nodes in a 9X2 configuration (i.e., 9 sets of 2 replicas each), such that each entry is mirrored across racks. The CORFU sequencer runs on a powerful, 32-core machine in a separate rack. The other 18 nodes are used as clients in our experiments, running applications and benchmarks that operate on Tango objects; we don’t model the external clients of these applications and instead generate load locally. We use 4KB entries in the CORFU log, with a batch size of 4 at each client (i.e., the Tango runtime stores a batch of 4 commit records in each log entry).
#### 6.1 Single Object Linearizability
We claimed that Tango objects can provide persistence, high availability and elasticity with high performance. To demonstrate this, Figure 8 shows the performance of a single Tango object with a varying number of views, corresponding to the use case in Figure 5(a), where the application uses Tango to persist and replicate state. The code we run is identical to the TangoRegister code in Figure 3. Figure 8 (Left) shows the latency / throughput trade-off on a single view; we can provide 135K sub-millisecond reads/sec on a read-only workload and 38K writes/sec under 2 ms on a write-only workload. Each line on this graph is obtained by doubling the window size of outstanding operations at the client from 8 (left-most point) to 256 (right-most point).
Figure 8 (Middle) shows performance for a ‘primary/backup’ scenario where two nodes host views of the same object, with all writes directed to one node and all reads to the other. Overall throughput falls sharply as writes are introduced, and then stays constant at around 40K ops/sec as the workload mix changes; however, average read latency goes up as writes dominate, reflecting the extra work the read-only ‘backup’ node has to perform to catch up with the ‘primary’. Note that either node can serve reads or writes, effectively enabling immediate fail-over if one fails. This graph shows that Tango can be used to support a highly available, high-throughput service.
Figure 8 (Right) shows the elasticity of linearizable read throughput; we scale read throughput to a Tango object by adding more read-only views, each of which issues 10K reads/sec, while keeping the write workload constant at 10K writes/sec. Reads scale linearly until the underlying shared log is saturated; to illustrate this point, we show performance on a smaller 2-server log which bottlenecks at around 120K reads/sec, as well as the default 18-server log which scales to 180K reads/sec with 18 clients. Adding more read-only views does not impact read latency; with the 18-server log, we obtain 1 ms reads (corresponding to the point on the previous graph for a 10K writes/sec workload).
#### 6.2 Transactions
We now show that Tango provides transactional support within and across objects. We first focus on single-object transactions. Figure 9 shows transaction throughput and goodput (i.e., committed transactions) on a single TangoMap object as we vary the degree of contention (by increasing the number of keys within the map) and increase the number of nodes hosting views of the object. Each node performs transactions, and each transaction reads three keys and writes three other keys to the map. Figure 9 (Top) chooses keys using a highly skewed zipf distribution (corresponding to workload ‘a’ of the Yahoo! Cloud Serving Benchmark [17]). Figure 9 (Bottom) chooses keys using a uniform distribution. For 3 nodes, transaction goodput is low with tens or hundreds of keys but reaches 99% of throughput in the uniform case and 70% in the zipf case with 10K keys or higher. Transaction throughput hits a maximum with three nodes and stays constant as more nodes are added; this illustrates the playback bottleneck, where system-wide throughput is limited by the speed at which a single client can play the log. Transaction latency (not shown in the graph) averages at 6.5 ms with 2 nodes and 100K keys. Next, we look at how layered partitioning alleviates the playback bottleneck.
Figure 10 (Left) substantiates our claim that layered partitioning can provide linear scaling until the underlying shared log is saturated. Here, each node hosts the view for a different TangoMap and performs single-object transactions (with three reads and three writes) over it. We use both a small shared log with 6 servers as well as the default 18-server one. As expected, throughput scales linearly with the number of nodes until it saturates the shared log on the 6-server deployment at around 150K txes/sec. With an 18-server shared log, throughput scales to 200K txes/sec and we do not encounter the throughput ceiling imposed by the shared log.
We stated that the underlying shared log enables fast transactions across objects. We now look at two types of transactions across different objects. First, in Figure 10 (Middle), we consider the partitioned setup from the previous experiment with 18 nodes running at 200K txes/sec, where each node hosts a view for a different TangoMap with 100K keys. We introduce cross-partition transactions that read the local object but write to both the local as well as a remote object (this corresponds to an operation that moves a key from one map to another). To provide a comparison point, we modified the Tango runtime’s EndTX call to implement a simple, distributed 2-phase locking (2PL) protocol instead of accessing the shared log; this protocol is similar to that used by Percolator [38], except that it implements serializability instead of snapshot isolation for a more direct comparison with Tango. On EndTX-2PL, a client first acquires a timestamp from a centralized server (corresponding to the Percolator timestamp server; we use our sequencer instead); this is the version of the current transaction. It then locks the items in the read set. If any item has changed since it was read, the transaction is aborted; if not, the client then contacts the other clients in the write set to obtain a lock on each item being modified as well as the latest version of that item. If any of the returned versions are higher than the current transaction’s version (i.e., a write-write conflict) or a lock cannot be obtained, the transaction unlocks all items and retries with a new sequence number. Otherwise, it sends a commit to all the clients involved, updating the items and their versions and unlocking them.
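The commit/abort/retry decision in this comparison protocol condenses into a small function; the parameter names are ours, and the real protocol is distributed across the clients holding the items:

```python
def endtx_2pl_decide(tx_version, read_checks, write_versions, locks_acquired):
    """Decision rule of the EndTX-2PL sketch: abort if any read item
    changed since it was read; retry on a failed lock or a write-write
    conflict (a returned item version above the transaction's own
    version); otherwise commit."""
    if any(seen != current for seen, current in read_checks):
        return "abort"
    if not locks_acquired or any(v > tx_version for v in write_versions):
        return "retry"
    return "commit"
```

Note the asymmetry: a stale read aborts outright, while lock failures and write-write conflicts only force a retry with a fresh sequence number.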
As Figure 10 (Middle) shows, throughput degrades gracefully for both Tango and 2PL as we double the percentage of cross-partition transactions. We don’t show goodput in this graph, which is at around 99% for both protocols with uniform key selection. Our aim is to show that Tango has scaling characteristics similar to a conventional distributed protocol while suffering from none of the fault-tolerance problems endemic to such protocols, such as deadlocks, crashed coordinators, etc.
Next, we look at a different type of transaction in Figure 10 (Right), where each node in a 4-node setup hosts a view of a different TangoMap as in the previous experiment, but also hosts a view for a common TangoMap shared across all the nodes (corresponding to the use case in Figure 5(d)). Each map has 100K keys. For some percentage of transactions, the node reads and writes both its own object as well as the shared object; we double this percentage on the x-axis, and throughput falls sharply going from 0% to 1%, after which it degrades gracefully. Goodput (not shown in the graph) drops marginally from 99% of throughput to 98% of throughput with uniform key selection.
#### 6.3 Other Data Structures
To show that Tango can support arbitrary, real-world data structures, we implemented the ZooKeeper interface over Tango in less than 1000 lines of Java code, compared to over 13K lines for the original [15] (however, our code count does not include ancillary code used to maintain interface compatibility, such as various Exceptions and application callback interfaces, and does not include support for ACLs). The performance of the resulting implementation is very similar to the TangoMap numbers in Figure 10; for example, with 18 clients running independent namespaces, we obtain around 200K txes/sec if transactions do not span namespaces, and nearly 20K txes/sec for transactions that atomically move a file from one namespace to another. The capability to move files across different instances does not exist in ZooKeeper, which supports a limited form of transaction within a single instance (i.e., a multi-op call that atomically executes a batch of operations).
We also implemented the single-writer ledger abstraction of BookKeeper [30] in around 300 lines of Java code (again, not counting Exceptions and callback interfaces). Ledger writes directly translate into stream append operations (with some metadata added to enforce the single-writer property), and hence run at the speed of the underlying shared log; we were able to generate over 200K 4KB writes/sec using an 18-node shared log. To verify that our versions of ZooKeeper and BookKeeper were full-fledged implementations, we ran the HDFS namenode over them (modifying it only to instantiate our classes instead of the originals) and successfully demonstrated recovery from a namenode reboot as well as failover to a backup namenode.
### 7 Related Work
Tango fits within the SMR [42] paradigm, replicating state by imposing a total ordering over all updates; in the vocabulary of SMR, Tango clients can be seen as learners of the total ordering, whereas the storage nodes comprising the shared log play the role of acceptors. A key difference is that the shared log interface is a superset of the traditional SMR upcall-based interface, providing persistence and history in addition to consistency.
Tango also fits into the recent trend towards enabling strong consistency across shared systems via a source of global ordering; for example, Percolator [38] uses a central server to issue non-contiguous timestamps to transactions, Spanner [18] uses real time as an ordering mechanism via synchronized clocks, and Calvin [44] uses a distributed agreement protocol to order batches of input transactions. In this context, the Tango shared log can be seen as a more powerful ordering mechanism, since it allows any client to iterate over the global ordering and examine any subsequence of it.
Multiple systems have aimed to augment objects with strong consistency, persistence and fault-tolerance properties. Thor [32] provided applications with persistent objects stored on backend servers. More recently, Live Objects [37] layer object interfaces over virtual synchrony, OpenReplica [6] transparently replicates Python objects over a Paxos implementation, while Tempest [34] implements fault-tolerant Java objects over reliable broadcast and gossip. Tango objects are similar to the Distributed Data Structures proposed in Ninja [22, 23], in that they provide fault-tolerant, strongly consistent data structures, but differ by providing transactions across items, operations, and different data structures.
Tango is also related to log-structured storage systems such as LFS [39] and Zebra [24]. A key difference is that Tango assumes a shared log with an infinite address space and a trim API; internally, the shared log implementation we use implements garbage collection techniques similar to those found in modern SSDs.
The transaction protocol described in Section 3 is inspired by Hyder [11], which implemented optimistic concurrency control for a fully replicated database over a shared log; we extend the basic technique to work in a partitioned setting over multiple objects, as described in Section 4. In addition, the Hyder paper included only simulation results; our evaluation provides the first implementation numbers for transactions over a shared log. The original CORFU paper implemented atomic multi-put operations on a shared log, but did not focus on arbitrary data structures or full transactions over a shared log.
A number of recent projects have looked at new programming abstractions for non-volatile RAM [16]; some of these provide transactional interfaces over commodity SSDs [40] or byte-addressable NV-RAM [47]. Tango has similar goals to these projects but is targeted at a distributed setting, where fault-tolerance and consistency are as important as persistence.
### 8 Conclusion
In the rush to produce better tools for distributed programming, metadata services have been left behind; it is arguably as hard to build a highly available, persistent and strongly consistent metadata service today as it was a decade earlier. Tango fills this gap with the abstraction of a data structure backed by a shared log. Tango objects are simple to build and use, relying on simple append and read operations on the shared log rather than complex messaging protocols. By leveraging the shared log to provide key properties – such as consistency, persistence, elasticity, atomicity and isolation – Tango makes metadata services as easy to write as a MapReduce job or a photo-sharing website.
### Acknowledgments
We’d like to thank Dave Andersen for shepherding the paper. We thank Phil Bernstein for making us think about shared log designs, Marcos Aguilera for valuable feedback on the system design and guarantees, Subramaniam Krishnan for help with the HDFS tests, Paul Barham for assistance with the sequencer implementation, and Shobana Balakrishnan for her input in the early stages of the project.
### References
[38] D. Peng and F. Dabek. Large-scale incremental processing using distributed transactions and notifications. In OSDI 2010.
[45940, 49962, null], [49962, 53133, null], [53133, 54632, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3099, true], [3099, 7214, null], [7214, 9668, null], [9668, 13926, null], [13926, 17775, null], [17775, 21662, null], [21662, 25539, null], [25539, 29320, null], [29320, 33589, null], [33589, 37750, null], [37750, 42120, null], [42120, 45940, null], [45940, 49962, null], [49962, 53133, null], [53133, 54632, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 54632, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 54632, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 54632, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 54632, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 54632, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 54632, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 54632, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 54632, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 54632, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 54632, null]], "pdf_page_numbers": [[0, 3099, 1], [3099, 7214, 2], [7214, 9668, 3], [9668, 13926, 4], [13926, 17775, 5], [17775, 21662, 6], [21662, 25539, 7], [25539, 29320, 8], [29320, 33589, 9], [33589, 37750, 10], [37750, 42120, 11], [42120, 45940, 12], [45940, 49962, 13], [49962, 53133, 14], [53133, 54632, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 54632, 0.24725]]}
|
olmocr_science_pdfs
|
2024-12-11
|
2024-12-11
|
abb3e3fd39653197313b7e8995f190678911321f
|
Privacy-Preserving Reengineering of Model-View-Controller Application Architectures Using Linked Data
Juan Manuel Dodero*, Mercedes Rodriguez-Garcia, Iván Ruiz-Rube and Manuel Palomo-Duarte
School of Engineering, University of Cadiz, Av. de la Universidad 10, 11519 Puerto Real, Cádiz, Spain
E-mail: juanma.dodero@uca.es, mercedes.rodriguez@uca.es, ivan.ruiz@uca.es, manuel.palomo@uca.es
*Corresponding Author
Received 18 March 2019; Accepted 22 November 2019; Publication 03 December 2019
Abstract
When a legacy system’s software architecture cannot be redesigned, implementing additional privacy requirements is often complex, unreliable and costly to maintain. This paper presents a privacy-by-design approach to reengineer web applications as linked data-enabled and implement access control and privacy preservation properties. The method is based on the knowledge of the application architecture, which for the Web of data is commonly designed on the basis of a model-view-controller pattern. Whereas wrapping techniques commonly used to link data of web applications duplicate the security source code, the new approach allows for the controlled disclosure of an application’s data, while preserving non-functional properties such as privacy preservation. The solution has been implemented and compared with existing linked data frameworks in terms of reliability, maintainability and complexity.
Keywords: Privacy by design, Web of data, Software architecture, Model-View-Controller.
doi: 10.13052/jwe1540-9589.1875
© 2019 River Publishers
1 Introduction
In the realm of a software system, confidentiality and privacy are non-functional properties aimed at protecting the system’s information resources. Such properties are especially relevant in the Web of data, largely concerned with procuring web applications that can publicly display their data and information, such that entities from heterogeneous information systems can be connected [20]. Reengineering an existing application for the Web of data must consider how to fulfil privacy preservation properties.
The challenge concerns Privacy by Design (PbD) principles, which consider privacy as an essential property to be considered during the design phase and throughout the entire engineering lifecycle [22]. Engineers have to go beyond functional requirements and be especially responsive to PbD aspects when engineering software artefacts that deal with personal data [4]. Since confidentiality concerns ethical and legal aspects that are beyond the scope of this paper, in the following we focus on the technical aspects related with data privacy preservation of software systems.
Adding non-functional properties, such as privacy preservation, to an already built software system is generally more expensive than taking proper measures while designing its architecture [49]. How expensive it is to take security measures on an existing software system depends on a number of factors, such as: the number, type and scope of security properties; the size and complexity of the information system architecture; and the affordances and constraints of the methods and software technologies used to engineer the reconstruction. Regarding the latter, existing methods to reengineer a legacy application for the Web of data are based on the thorough analysis of the documented specifications, design diagrams and software manuals that describe its architecture [53]. They overlook, however, a first-class element of software architecture and design, which is the source code.
Source code can be used to gain valuable insights about the design and architecture of a legacy web application. The hypothesis of this work is that reengineering for privacy a web application architecture at the level of source code can provide an advantage in terms of reliability and maintainability when obtaining an extended version of the application that considers PbD properties, which might have been overlooked in the original version.
Source code-level interventions to improve an application’s security and privacy are inspired by and framed in the Security by Design (SbD) and PbD principles, which have significant expressions in Role-Based Access
Control (RBAC) and Privacy-Preserving Data Publishing (PPDP), among other techniques [15].
When it comes to adding PbD protections, it is difficult to imagine a broad-spectrum solution that tames the size, complexity and architectural diversity of web applications. Nonetheless, the large number of web applications based on the Model-View-Controller (MVC) pattern and its derivatives [8] is a Pareto argument in favor of limiting the scope. Therefore, it is reasonable to focus on MVC-based applications in the Web of data as the target architectures to be reengineered.
Such applications have to be addressed with software technologies that have been successfully used to publish data and information resources in the Web of data. In this vein, Linked Data (LD) methods have proven useful for preparing web applications with standard vocabularies and schemata in order to publish and link the application data and resources [23]. Best practices for publishing linked data define how data can be published and linked by means of standard technologies such as RDF and JSON-LD. Providing an existing application with such capabilities encompasses defining a metadata schema, compiling and linking the application data, and providing an Application Programming Interface (API) that enables third parties to browse the application resources based on public metadata [18]. Numerous semantic methods and software frameworks have been used to engineer LD-enabled versions of web applications and information systems [53]. Recently, PbD and privacy preservation have become prevalent concerns in the LD research field [27].
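As an illustration of these publishing technologies, the following sketch serializes an application resource as JSON-LD, aligned with the DOAP vocabulary. The resource URL, property selection and values are hypothetical, chosen only to show the shape of such a document:

```python
import json

# Hypothetical example: a project resource exposed as JSON-LD, aligned
# with the DOAP vocabulary. URL and property values are illustrative.
resource = {
    "@context": {
        "doap": "http://usefulinc.com/ns/doap#",
        "name": "doap:name",
        "homepage": {"@id": "doap:homepage", "@type": "@id"},
    },
    "@id": "https://example.org/projects/42",
    "@type": "doap:Project",
    "name": "Example Project",
    "homepage": "https://example.org/projects/42/home",
}

doc = json.dumps(resource, indent=2)
print(doc)
```

The `@context` maps the application's local field names onto the standard vocabulary, which is exactly the kind of alignment the metadata-mapping step performs.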
Our contribution is a new PbD approach based on LD technologies, used to reengineer MVC-based web applications that improve reliability and maintainability as relevant quality features in the reengineered application, after incorporating the confidentiality and data privacy preservation requirements.
To investigate the former hypothesis, we have followed a design-and-creation information systems research methodology [35]. It involves the steps of awareness, suggestion, development, evaluation and conclusion [52], which constitute the structure of the rest of the paper. After PbD issues are described in Section 2 as part of the awareness step, Section 3 analyzes the existing LD architectures and frameworks in order to suggest the reengineering for privacy PbD strategy. At the end of the suggestion step, we propose a first contribution, consisting in an original classification of current LD reengineering strategies. Section 4 develops, as the main contribution, a general linked data reengineering framework, named EasyData, which is applied to provide legacy MVC-based web applications with privacy preservation properties. Section 5 includes the evaluation of EasyData against other
comparable frameworks, along with a discussion of the research results and their limitations. Finally, Section 6 presents some conclusions and future lines of research.
2 Data Privacy by Design
When data about individuals are involved, special care must be taken to avoid privacy violations. Data privacy by design [6] implies that sanitization approaches based on removing identifiers are not enough to preserve individuals’ privacy, because certain combinations of non-identifying personal data, known as quasi-identifiers (QIs) [9], may be linked with other information sources to re-identify them [44]. Nowadays, the amount of available information and data sources, together with increasing computational power, facilitates such re-identifications. Because re-identification constitutes a real privacy threat and the protection of individuals’ privacy is a fundamental right, legal regulations [14, 51] have set out the need to adequately protect personally identifiable information (PII) [34], which is any information about an individual that can be used to distinguish or trace her identity (e.g., name or birth date) and any other information that is linked or linkable to her identity (e.g., medical, educational, financial and employment data).
To secure PII confidentiality, PPDP techniques generate a transformed version of the data that alters the PII it contains while still remaining valid for statistical analysis [21]. In order to address the current obligations for PII protection and, thus, offer ex ante privacy guarantees against identity disclosure, the design of a web application that publishes individuals’ data (e.g. a healthcare company web application used by its clients) must consider the diversity of PbD methods and techniques as a first-class requirement. Data privacy preservation can be implemented in legacy web applications when transforming their architecture to a LD-enabled one. This usually involves extending the legacy application with added middleware components [19], which have to duplicate the implementation of diverse non-functional properties like security. Many LD techniques and software tools exist to map relational data sources [48], interlink datasets [54] and expose LD APIs as middleware [17]. As with general software systems, these approaches have proven costly and not free of significant risk [27, 53].
2.1 A Motivating Example
Despite the policies that legally regulate the use of web data sources [6], and even though data items must be anonymized before publishing an application’s data, one cannot prevent someone from inferring sensitive information [40]. This is especially worrisome in the light of linked data applications.
For instance, let’s suppose $DS_1$ is the dataset of a tax registry web source, having the attributes $address$, $birthdate$, $sex$, $postcode$, $name$ and $taxes$; and $DS_2$ is the dataset from another web source to consult energy consumption, containing the attributes $birthdate$, $sex$, $postcode$, $electricityConsumption$ and $gasConsumption$. Even after removing explicit identifiers, an individual’s $name$ in $DS_1$ can be linked with another record in $DS_2$ through the combination of the $postcode$, $birthdate$ and $sex$ attributes. Each attribute value alone does not uniquely identify a record owner, but linking data from both applications forms a QI that might point to a unique record or a small number of records. The attacker can thus notice that one house at a certain address might be unoccupied because its $electricity$ and $gas$ consumption are almost nil. This can pose a burglary threat, but it can also be a tool for tax agencies to investigate occupied rental houses for which the lessor might have unpaid taxes.
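The linking attack described above can be sketched in a few lines of Python. All records, names and values here are fabricated for illustration:

```python
# Sketch of a QI linking attack (all records are fabricated).
# DS1: tax registry source, with the explicit identifier `name`.
ds1 = [
    {"name": "Alice", "birthdate": "1980-04-02", "sex": "F", "postcode": "11519", "taxes": 2300},
    {"name": "Bob",   "birthdate": "1975-09-17", "sex": "M", "postcode": "11510", "taxes": 1800},
]
# DS2: energy consumption source, "anonymized" by dropping names.
ds2 = [
    {"birthdate": "1980-04-02", "sex": "F", "postcode": "11519", "electricityConsumption": 1,   "gasConsumption": 0},
    {"birthdate": "1975-09-17", "sex": "M", "postcode": "11510", "electricityConsumption": 250, "gasConsumption": 90},
]

QI = ("birthdate", "sex", "postcode")

def link(ds_a, ds_b, qi):
    """Join two datasets on a quasi-identifier combination."""
    index = {tuple(r[k] for k in qi): r for r in ds_b}
    return [{**a, **index[tuple(a[k] for k in qi)]}
            for a in ds_a if tuple(a[k] for k in qi) in index]

for row in link(ds1, ds2, QI):
    if row["electricityConsumption"] < 5 and row["gasConsumption"] < 5:
        print(f'{row["name"]}\'s house may be unoccupied')
```

Although neither dataset publishes a name next to the consumption figures, the join on the QI re-attaches the identity to the sensitive attribute.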
Even when anonymizing the combination of datasets by means of generalization techniques on the QIs, there is a possibility that QIs are split across two datasets before they are linked for a given analysis. For instance, let the $DS_1$ schema be $(userId, sex, postalAddress, defaultRisk)$, and let the $DS_2$ schema be $(userId, occupation, defaultRisk, electricityConsumption, gasConsumption)$, as shown in Table 1. Assuming that a data analyst needs to combine $DS_1$ and $DS_2$ to predict, let’s say, the risk of finance default, then $DS_1$ and $DS_2$ can be linked and merged by matching the $userId$ field in a new dataset $DS$ that is then anonymized. Then the sex and occupation attributes form a new QI, which was not included in either dataset separately, so a linking attack is still possible on such fields of $DS$. After integrating the tables of both datasets, the $(Female, Carpenter)$ individual on the $(sex, occupation)$ attribute pair becomes unique and vulnerable to the linking of sensitive information, such as $postalAddress$ and $energyConsumption$.
Because the ultimate motivation of data releasing is to conduct data-driven analyses, sanitizing and anonymization should be done in a way that the protected data still retain as much analytical utility as possible; that is, the conclusions extracted from the analysis of the anonymized dataset should be
Table 1 Linked data items of an example linking the datasets from a tax registry application and an energy consumption application
<table>
<thead>
<tr>
<th>userId</th>
<th>defaultRisk</th>
<th>sex</th>
<th>postalAddress</th>
<th>occupation</th>
<th>electricityConsumption</th>
<th>gasConsumption</th>
</tr>
</thead>
<tbody>
<tr><td>1–3</td><td>0y3n</td><td>M</td><td>A1</td><td>Sales</td><td>18</td><td>17</td></tr>
<tr><td>4–7</td><td>0y4n</td><td>M</td><td>A2</td><td>Ceramist</td><td>24</td><td>8</td></tr>
<tr><td>8–12</td><td>2y3n</td><td>M</td><td>A3</td><td>Plumber</td><td>25</td><td>10</td></tr>
<tr><td>13–16</td><td>3y1n</td><td>F</td><td>A4</td><td>Webmaster</td><td>20</td><td>17</td></tr>
<tr><td>17–22</td><td>4y2n</td><td>F</td><td>A5</td><td>Animator</td><td>31</td><td>11</td></tr>
<tr><td>23–25</td><td>3y0n</td><td>F</td><td>A6</td><td>Animator</td><td>34</td><td>10</td></tr>
<tr><td>26–28</td><td>3y0n</td><td>M</td><td>A7</td><td>Carver</td><td>32</td><td>12</td></tr>
<tr><td>29–31</td><td>3y0n</td><td>F</td><td>A8</td><td>Carver</td><td>30</td><td>14</td></tr>
<tr><td>32–33</td><td>2y0n</td><td>M</td><td>A9</td><td>Carpenter</td><td>33</td><td>11</td></tr>
<tr><td>34</td><td>1y0n</td><td>F</td><td>A10</td><td>Carpenter</td><td>29</td><td>15</td></tr>
</tbody>
</table>
similar to those of the original dataset. With the goal of balancing privacy and utility preservation, PPDP methods [15] have been used to sanitize published datasets by modifying the original QI attributes while preserving certain statistical features.
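The emergence of a new QI after merging can also be demonstrated with a few fabricated records (a simplified excerpt in the spirit of Table 1, not its actual contents):

```python
from collections import Counter

# Merged dataset DS (fabricated, simplified): after joining DS1 and DS2
# on userId, `sex` and `occupation` end up in the same table.
ds = [
    ("M", "Sales"), ("M", "Sales"),
    ("M", "Carpenter"), ("M", "Carpenter"),
    ("F", "Animator"), ("F", "Animator"),
    ("F", "Carpenter"),
]

counts = Counter(ds)
# Any (sex, occupation) pair occurring once is a re-identification risk.
unique_pairs = [pair for pair, n in counts.items() if n == 1]
print(unique_pairs)
```

In this excerpt only the (F, Carpenter) record is unique on the new QI, mirroring the (Female, Carpenter) individual singled out in the text.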
2.2 Reengineering for Privacy Preservation
Different privacy models can be considered to define the sanitizing conditions. One of the most widely used anonymization models is $k$-anonymity [21]. The idea underlying $k$-anonymity [45] is to homogenize the QI attributes to make them indistinguishable in groups of at least $k$ records, thus limiting to $1/k$ the probability of re-identification. Two distortion methods can be used to enforce $k$-anonymity, i.e. generalization and microaggregation. The generalization method [45] homogenizes the quasi-identifiers with the most specific superclass of the $k$-record group, while the microaggregation method [32] homogenizes the quasi-identifiers with the average of the $k$-record group. In previous works, PPDP methods have been improved to exploit the semantics of nominal values and replace them by concepts in an ontology [41, 42].
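A minimal sketch of single-attribute microaggregation follows, assuming a numeric quasi-identifier and fixed-size groups; real methods use optimal partitioning and multivariate distances, so this is only the core idea:

```python
from statistics import mean

def microaggregate(values, k):
    """k-anonymity by microaggregation on one numeric quasi-identifier:
    sort the values, partition into groups of at least k consecutive
    records, and replace each value with its group average. (A simplified
    sketch, not an optimal-partition algorithm.)"""
    order = sorted(range(len(values)), key=lambda i: values[i])
    result = [0.0] * len(values)
    for start in range(0, len(order), k):
        group = order[start:start + k]
        if len(group) < k and start > 0:   # fold a short tail into the previous group
            group = order[start - k:]
        avg = mean(values[i] for i in group)
        for i in group:
            result[i] = avg
    return result

ages = [23, 25, 24, 41, 40, 39]
print(microaggregate(ages, k=3))   # each output value is shared by >= 3 records
```

After the transformation each anonymized value appears in at least $k$ records, so the probability of re-identification through this attribute is at most $1/k$, as the text states.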
All these are semantic privacy-preserving techniques that, usefully implemented into a linked data application, facilitate the fulfillment of privacy properties. The issue here is how to engineer privacy properties into an existing web application, i.e. reengineering for privacy preservation. For instance, let’s consider for the first example of the previous section the development of a privacy-preserving version that follows a layered security architecture [50]. The security controls are usually implemented on top of the data model to
provide linked versions of user identifiers and other PII, potentially QIs, such as postcode, birthdate and sex. Implementing a new controller operation to link a user ID with its PII, however, might impose a restriction related to data privacy. The logic for the $k$-anonymity privacy preservation model, for instance, is normally implemented in the controller component. To be safe, it would have to be duplicated in the database mapping code as well as in the database stored procedures. Code duplication in different architectural layers severely reduces the reusability and maintainability of an application. Implementing the privacy restrictions at only one layer can pose a design-level impediment, since third-party applications (e.g. a mobile app) do not necessarily access the same controller components; some clients might therefore bypass the data privacy-preserving controls. In general, duplicating the controller logic to implement changeable security properties is not good practice.
3 Reengineering MVC-based Applications
The redesign of a web application to include SbD or PbD properties is more expensive than considering such requirements from scratch, but it is often unavoidable. A regular web application’s architecture can be tackled at any of the three layers of the MVC pattern, namely the data binding model, the web view and the controller logic. Interventions at the view level, known as web scraping or harvesting, consist of directly accessing the application HTTP interface to extract the data that is published as HTML. This usually requires some type of license agreement with the application owner, but that discussion is outside the scope of our research. Therefore, we constrain the discussion to the controller and data binding layers of the MVC architecture.
Confidentiality requirements, such as access authorization or privacy preservation, are often implemented as part of the application business logic. Web applications are not usually designed to implement their business logic in the data layer (for instance, as stored procedures of the database), but in intermediate controller components instead. The controller and data binding components of an existing MVC application have been largely explored as alternative points where to provide data access [5, 10, 17]. Security requirements can be implemented in the controller layer, as some frameworks do—e.g. Spring Security\footnote{https://spring.io/projects/spring-security} maps permissions and access authorizations to each controller function. On the other hand, we can grant RBAC permissions over an application’s data at the data binding or the database level. Then,
new controller operations that need to access and render model data will not be protected against unauthorized access. For instance, consider that mobile apps usually include a separate controller layer implementation in the overall architecture. Access control code has to be duplicated in the potentially multiple implementations of controllers, as well as in the data binding layer or even in the database logic. For layered security requirements, RBAC grants and permissioned stored procedures should be implemented also in the database, which might be a source of code duplication. In sum, SbD and PbD are concerns that involve the overall MVC architecture of the application.
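The duplication problem can be made concrete with a small sketch: a hypothetical controller-layer RBAC guard (names and roles are invented, not from any particular framework) that every separate controller implementation would have to repeat:

```python
from functools import wraps

# Illustrative role-to-operation permission table.
ROLE_PERMISSIONS = {"admin": {"find", "findBy"}, "analyst": {"findBy"}}

def requires_permission(op):
    """Controller-layer RBAC check (hypothetical names). Every separate
    controller implementation (web, mobile API, ...) must repeat this
    guard, which is the code-duplication problem discussed above."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(user_role, *args, **kwargs):
            if op not in ROLE_PERMISSIONS.get(user_role, set()):
                raise PermissionError(f"role {user_role!r} may not call {op!r}")
            return fn(user_role, *args, **kwargs)
        return wrapper
    return decorator

@requires_permission("find")
def find_all(user_role):
    return ["project-1", "project-2"]

print(find_all("admin"))
```

A mobile back end with its own controllers would need the same decorator (or equivalent checks in stored procedures), which is why the text argues that SbD and PbD cut across the whole MVC architecture.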
3.1 Analysis of Linked Data Architectures and Frameworks
The architecture of linked data applications or Linked Data Application Architecture (LDAA) is a means to structure the components a LD software system comprises [19]. An extension to the LDAA has been implemented upon a linked data API layer [17] on top of a data access layer (see Figure 1) to mediate between consumer applications and the data sources. As described in [19], the most widespread LDAA is the crawling pattern, which is suitable for implementing linked data applications over a growing set of resources that are crawled and discovered. On the top layer, the crawling architecture has a pipeline of modules, i.e. web access, vocabulary mapping, identity resolution and quality evaluation. An RDF API or SPARQL endpoint is served by such modules, which form the data access, integration and storage layer.
Pipelining all the functional modules of the data access and integration layer eventually leads to the integrated database, which feeds the SPARQL endpoint or API mediator module. In the bottom, the publication layer usually implements wrapper modules that, either by web scraping [38] or enriching [26], add semantics to existing resources and data. Setting up a middleware module is a common strategy to reengineer existing sources, which can range from HTML pages to structured data to web APIs [36]. Other approaches harvest semi-structured HTML content and automatically convert it into structured linked data instances [29]. Scraping and data wrapping techniques either convert data to linked data or provide an API to access data [23].
Thanks to an LD framework, reengineering of a legacy application can be implemented at the data binding model, the web view or the controller layers of its MVC architecture. Next, we analyze where in the MVC layers each framework operates.
Figure 1 Extension to the original LDAA [19] with additional functions for the data, integration and storage layers [17].
- Apache Stanbol, KIM and SDArch [26] are examples of semantic enriching procedures that operate at the view level.
- The D2RQ server [5], Triplify [3] and Virtuoso RDF Views [12] are useful approaches to build wrappers at the data binding level. ActiveRDF can be used to align an application Object-Relational Mapping (ORM) component with a given RDF schema [37].
- Middleware implementations, such as Virtuoso sponger [13] and Pubby, work at the controller level. Hydra [28] is a middleware implementation that also provides clients with JSON-LD descriptions of a new vocabulary, able to express common concepts of Web APIs. Other solutions,
such as the Datalift platform [46] rely upon existing tools such as Silk [24] to provide interlinked RDF datasets.
3.2 LDAA Reengineering Strategy
Reengineering an MVC-based application at the source code level can be an advantage to provide a reliable and maintainable extension that incorporates additional properties. Clearly, this approach is feasible only as long as the application source code is available. It also has some constraints and limitations that will be discussed later.
Before articulating the suggestion phase of the research methodology, we have participated in the development of linked open data systems for a number of disciplines, such as Information Science (IS) [25] and Software Process Management (SPM) [43], in which we used the LD tools and frameworks analyzed above. As a consequence, a number of methodological and practical considerations for LDAA reengineering have emerged and influenced the proposed methodology.
4 Proposed Methodology for LDAA Reengineering
We have defined a LDAA reengineering methodology that considers a number of application features in order to decide the applicable reengineering practices. Such features are: (i) the availability of source code, (ii) the provision of APIs or built-in information exposure services, and (iii) the concealment level of the enclosed data. The effort required by the reengineering practices ranges from seamless API-based integration of LD-enabled applications to costly adaptations for those that might not use machine-friendly data formats and protocols. The reengineering methodology is graphically summarized in Figure 2.
4.1 Reengineering Methodology
The methodological aspects have to consider the application architecture. In this vein, the reengineering strategies have been classified as scraping, wrapping, and extension, as depicted in Figure 3. Such strategies can be applied either at the data level or the API level. The following classification is a first contribution of the paper, emerging from a thorough analysis of existing LD frameworks and prior experience using them to build LD-enabled applications.
• **LDAA data scraping** [26,38]: This strategy applies if the web application source code and internal data storage are not available at all, probably because the application was not initially designed for third-party reuse. Information retrieval, web scraping and harvesting techniques are the practicable reengineering alternatives.
• **LDAA data wrapping** [3, 5, 12, 46]: Sometimes the application’s source code is not available, but data is available in an open format. Then, adapters or data wrappers can transform LD requests into queries to the application data storage. Depending on the kind of storage, queries can be issued to database systems, structured files or any other data storage system used by the application.
• **LDAA API wrapping** [1, 2, 10]: This is a practical choice when the application already provides an external API for reusing data and information. Then a proxy, wrapper or middleware component is implemented, so that LD requests are formatted for the API and relayed back and forth. The wrapper or middleware can implement data transformations and adaptations on top of the original API operations.
• **LDAA API extension**: If the application does not provide an external API, but its source code is available, a software add-on can be implemented to provide the LD API. In this case, data and business models can be discovered from source code analysis of the MVC implementation. On one hand, if applied at the model layer, the extension strategy generates a LD schema from the internal data model implementation. The schema and data instances can be revealed through an external API. The local namespace for the schema generated in this way can initially reflect the application’s internal data model. Yet it can be aligned with standard LD vocabularies through user-defined configurations before publishing, as in the LDAA data wrapping case. On the other hand, if applied at the controller level, the API extension strategy can use the existing implementation in order to avoid code duplication. Extending the API does not consist only in wrapping the existing controller implementation (i.e., the internal API) to make it public as a functionally equivalent API, because the external API requirements might not coincide with the internal one’s.
The LDAA API extension strategy makes it easier to implement extended features as part of an enriched API. This is an opportunity to include a set of additional features, either functional or non-functional. For instance, the internal API of a legacy application can implement some finder methods that return all objects of a given type. The external API, however, may require defining additional findBy methods that return only the objects that fulfill a given filtering condition. The latter is simply a functional extension of the existing API. On the other hand, different security privileges can be granted for the find and findBy methods, or for diverse executions of the same method, depending on the calling user’s role. Even the data output from a method call can be sanitized after applying a custom PPDP policy. In the following we will focus on how our LDAA extension strategy is developed to include such privacy preservation properties.
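A minimal sketch of this extension idea, with entirely hypothetical names and a toy sanitization policy (not the EasyData API), could look like this:

```python
# Sketch of the API-extension idea (all names are hypothetical): the
# legacy internal finder is reused as-is, the external API adds a
# filtered findBy variant, and output is sanitized before publication.

LEGACY_USERS = [
    {"id": 1, "name": "Alice", "postcode": "11519", "hours": 12},
    {"id": 2, "name": "Bob",   "postcode": "11510", "hours": 7},
]

def find_users():                 # existing internal API, reused unchanged
    return LEGACY_USERS

def find_users_by(predicate):     # functional extension of the API
    return [u for u in find_users() if predicate(u)]

def sanitize(record):
    """Toy PPDP policy: drop direct identifiers, generalize the postcode QI."""
    out = {k: v for k, v in record.items() if k not in ("id", "name")}
    out["postcode"] = out["postcode"][:3] + "**"
    return out

external = [sanitize(u) for u in find_users_by(lambda u: u["hours"] > 10)]
print(external)
```

The point is that the filtering and sanitization live in one place, wrapping the existing controller logic instead of duplicating it in every layer.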
4.2 Privacy-preserving LDAA Extension
Whereas data scraping and wrapping techniques are commonly used to add semantics to existing web applications, we propose a new extension approach that can be used to expose the internal structure and data model of a legacy app as linked data in a controlled and privacy-preserving way.
EasyData is the name of a new LDAA extension approach to reengineer legacy MVC-based web applications so as to provide them with additional non-functional properties. It has been used to implement privacy preservation requirements as a type of security property. The reengineering cycle consists of a number of functional steps, which can be mapped to regular LDAA modules [17] as explained next.
1. Revealing the underlying application data model: A linked data model equivalent to the application ORM schema is generated and published as RDF. In addition, upon a web application’s request, RDFa and microdata annotations are generated and embedded into the response HTML view. To facilitate external linking with standard vocabularies, metadata mappings can be configured. In this stage, the functionalities of the LDAA web access and vocabulary mapping modules are developed.
2. Linking the application data instances: The linked datasets retrieved from the application’s internal data storage can be processed and linked. Internal data items can be directly linked. Afterwards, an external interlinking module can be used to link external resources [54]. A complementary study on how interlinking tools can help data publishers to connect their resources to the Web of data can be found elsewhere [39].
In combination with a proper interlinking tool, this phase develops the functionalities of the LDAA identity resolution module.
3. Controlling the target non-functional quality properties for the application: Considering the scope of our work, security and privacy preservation of data and information resources are such quality aspects. In this phase, the PPDP techniques described above can be seamlessly applied for each published data item and data type. This phase is part of the LDAA quality evaluation.
4.3 Implementing the PbD Interventions
Two different prototypes have been implemented to illustrate and test the EasyData LDAA extension strategy. The prototypes enable the procedure to be applied with two different development languages and open source frameworks, which underpin the architecture of a considerable number of MVC-based web applications. The first is a Ruby gem\(^2\) used to reengineer Ruby-on-Rails web applications following the LDAA extension strategy; the second is a Python add-on\(^3\) used to deliver LDAA extensions of applications built with the Django framework.
Next, we show how the steps of the EasyData reengineering strategy can be performed using one of the EasyData implementation tools. The Redmine\(^4\) open source project management application is used as a running example to illustrate how the process can be carried out on a legacy web application.
4.3.1 Revealing the Application Data Model
The first step is to generate and publish an RDF model equivalent to the application’s data model. A simplified schema of Redmine data is formed by the `Project`, `Issue`, `User` and `TimeEntry` classes, as illustrated in the Rails implementation of Figure 4. EasyData can render the RDF model from the web application source code, as shown in Figure 4. The `set_rdf_model_name` configuration option defines the alignment of the application data model elements with concepts and properties of a standard RDF schema. In this example, the Redmine `Project` objects are mapped to DOAP `projects`, the Redmine `User` objects are mapped to FOAF `persons`, and the Redmine `TimeEntry` objects are mapped to OWL-Time `durations`. Redmine `Issue`
Namespace.register(
:doap, "http://usefulinc.com/ns/doap#")
Namespace.register(
:foaf, "http://xmlns.com/foaf/0.1/"
Namespace.register(
:time, "http://www.w3.org/2006/time#")
class Project < ActiveRecord::Base
has_many :issues
set_rdf_model_name "doap:Project"
end
class Issue < ActiveRecord::Base
@status = IssueStatus::OPEN
belongs_to :project
set_rdf_model_name "xmlns:Issue"
end
class TimeEntry < ActiveRecord::Base
@spent = 0
set_rdf_model_name "time:DurationDescription"
end
class ProjectTimeEntry < TimeEntry
set_rdf_property_name "xmlns:MemberFor"
end
class IssueTimeEntry < TimeEntry
set_rdf_property_name "xmlns:AssignedFor"
end
class User < ActiveRecord::Base
has_many :projects,
:through => :projectTimeEntries
has_many :issues,
:through => :issueTimeEntries
set_rdf_model_name "foaf:Person"
end
Figure 4 Specifying an application’s data model with EasyData in the Ruby implementation of Model components.
objects are not mapped to elements from an external vocabulary, since programmers could not find a standard vocabulary defining what a tracking issue is.
4.3.2 Linking Application Data Instances
External linking targets can be added to the application by means of template tags. Instead of linking to the inner application model entities revealed in the previous step, the application view can be provided with links to other entities discovered by an external interlinking tool. For example, an interlinking process configured to match the Redmine data revealed by EasyData with a DBPedia dataset can map Redmine’s Project with DBPedia’s Project
resource type, Issue with DBPedia’s Issue_tracking_system and User with DBPedia’s User_(computing). EasyData template tags can be used to include links to such DBPedia entities to configure this mapping.
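The DBpedia mapping described above can be illustrated as follows. The resource URIs follow the mapping in the text, but the tag helper is our own sketch, not an actual EasyData template tag.

```ruby
# Illustrative only: the DBpedia resource URIs follow the mapping described
# in the text, but the tag helper below is our own sketch, not an actual
# EasyData template tag.
DBPEDIA_LINKS = {
  "Project" => "http://dbpedia.org/resource/Project",
  "Issue"   => "http://dbpedia.org/resource/Issue_tracking_system",
  "User"    => "http://dbpedia.org/resource/User_(computing)"
}.freeze

# Render an RDFa-style anchor declaring the external entity as equivalent.
def dbpedia_link_tag(model_name)
  uri = DBPEDIA_LINKS.fetch(model_name)
  %(<a rel="owl:sameAs" href="#{uri}">#{model_name}</a>)
end

tag = dbpedia_link_tag("Issue")
```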
4.3.3 Controlling Authorized Access
This is the first part of the controlling phase, which is applied twice: once for security access control and once for privacy preservation. Access control grants can be configured for the data items, data types and service operations generated in the previous steps. Figure 5 is an example of how the MVC controllers are configured with the has_permission_on and filter_access_to options. That permits access to Project and Issue resources as well as the getIssues and getAssignedIssues operations in a Redmine instance. The example defines access permissions for specific user roles (e.g., admin and analyst) and operations (e.g., create, read, update and delete). Note that this security configuration is a simple extension of the available Rails configuration and does not need to be repeated elsewhere in external LD wrappers, thus reducing dispensable code smells.
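Since the listing of Figure 5 is not reproduced here, the following stand-in sketches the kind of role-based configuration it describes. The DSL below only records and checks rules; it is our own minimal analogue, not the actual Rails authorization plugin behind has_permission_on and filter_access_to.

```ruby
# Minimal stand-in for the role-based configuration described in Figure 5.
# Not the actual Rails authorization plugin: it just records and checks rules.
class AccessPolicy
  def initialize
    @rules = Hash.new { |h, k| h[k] = [] }
  end

  # Grant a role some operations on a resource.
  def has_permission_on(resource, to:, role:)
    to.each { |op| @rules[resource] << [role, op] }
  end

  # Controller-side check before executing an operation.
  def allowed?(resource, operation, role)
    @rules[resource].include?([role, operation])
  end
end

policy = AccessPolicy.new
policy.has_permission_on :projects, to: [:read], role: :analyst
policy.has_permission_on :issues, to: [:create, :read, :update, :delete], role: :admin
```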
4.3.4 Controlling Data Privacy Preservation
This is the second part of the controlling phase, aimed at privacy-preserving data publishing. The datasets retrieved from the web application database can be configured to be sanitized before release, thereby offering privacy guarantees against identity disclosure. The guarantees are achieved in the example by setting certain privacy requirements to yield \(k\)-anonymous datasets. Figure 6 shows an example of how the MVC controllers are configured with new sanitizing rules and queries. The \(k\)-anonymity symbol defined in Ruby specifies the PPDP method that renders the data records from a query \(k\)-anonymous, with generalize and microaggregate as the available value options. The \(k\_arg\) option specifies the desired value of \(k\), which determines the privacy degree of the resulting data records. The higher the \(k\), the higher the privacy degree of the result, but the lower its analytic utility will be. Finally, the set of QI attributes to be sanitized is defined with the quasi_id option.
The example defines a configuration to sanitize the output data records from the getAssignedIssues\(^5\) controller function, which also has an access
\(^5\)The getAssignedIssues function actually returns the results of a query that joins a set of attributes from the Projects, Issues and User tables
Figure 5 Integrating security access features with EasyData in the Ruby implementation of Controller components.
control filter as specified by the filter_access_to option. The output dataset will be 4-anonymous via generalization of the QI formed by project_name, issue_name, organization and start_date.
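The sanitization step can be sketched in plain Ruby: a dataset is \(k\)-anonymous when every group of records sharing the same quasi-identifier (QI) values has at least \(k\) members, and generalizing dates to the year is one simple way to reach that. The records below are made up for illustration, using a subset of the QI attributes named above.

```ruby
# A plain-Ruby sketch of the sanitization described above: a dataset is
# k-anonymous when every group of records sharing the same quasi-identifier
# (QI) values has at least k members. The records are made up for illustration.
QI = [:project_name, :organization, :start_date].freeze

def k_anonymous?(records, k)
  records.group_by { |r| r.values_at(*QI) }.values.all? { |g| g.size >= k }
end

# Generalization: truncate "YYYY-MM-DD" start dates to "YYYY".
def generalize_dates(records)
  records.map { |r| r.merge(start_date: r[:start_date][0, 4]) }
end

records = [
  { project_name: "redmine", organization: "acme", start_date: "2019-01-10" },
  { project_name: "redmine", organization: "acme", start_date: "2019-03-02" }
]
```

The raw records form two singleton QI groups, so they are not 2-anonymous; after generalizing both dates to "2019" they fall into one group of size two and the 2-anonymity requirement holds.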
5 Evaluation
In this section we evaluate the validity of the EasyData PbD reengineering method to test the hypothesis that it can provide an advantage in terms of reliability and maintainability over other LD solutions. Therefore, we compare EasyData with a baseline of five LD frameworks, which were analyzed in Section 3.1. A thorough inspection of the software libraries’ implementations was carried out in order to filter out components that do not provide a function equivalent to our solution’s, or that have nothing in common with the compared frameworks. To ensure a comparable scope and to avoid bias in the filtering criteria, all the library components were carefully analyzed by experts who had previously built IS [25] and SPM applications [43] using these LD frameworks.
In order to understand and analyze the advantages, a benchmark is performed on a number of static analysis metrics of reliability, maintainability and complexity. Measures have been computed with the SonarQube\(^6\) source code static analysis tool. The compared frameworks have been selected provided that they implement LDAA software component modules for either data or API wrapping approaches, they are implemented in a language that can be statically analyzed, and their source code is openly available.
5.1 Measures
The software metrics chosen for static analysis enable the comparison of software reliability and security, maintainability, and size and complexity, among other features. Except for the size metric, the reliability, maintainability and complexity metrics delivered by SonarQube are language-independent, so the tools can be compared regardless of their implementation language. The following are the types of metrics provided:
- **Size and complexity**: measurements of size and complexity of the code.
- \(\text{LOC}\): physical Lines of Code; physical LOCs are a simple, source code-dependent measure of the program size.
- \(\text{Statements}\): number of statements in the source code; SonarQube unifies this metric and makes it independent of the parsed language.
- \(\text{Functions}\): number of functions in the source code.
- \(\text{CC}\): Cyclomatic Complexity (CC), computed based on the number of control flow paths through the code [33]. SonarQube slightly varies the standard calculation, depending on the implementation language.
- \(\text{CC Density (CCD)}\): density measured as the average CC per statement in the source code; it provides a program size-independent measurement of complexity, which is demonstrated to be a useful predictor of software maintenance productivity [16].
- **Maintainability and code duplication**: measurements of maintainability issues and of the amount of code involved in duplications.
- \(\text{Code smells}\): the number of code smells, as symptoms in the source code that may indicate a deeper problem.
- \(\text{Technical debt (TD)}\): the effort to fix all maintainability issues, measured as hours of required work to remediate the issues, or the ratio between the cost to develop the software and the cost to fix it. This ratio is computed as the remediation cost divided by the development cost. The development cost is estimated as 0.06 days (i.e., nearly 30 minutes) per line of code [7].

---
\(^6\)https://www.sonarqube.org/
- **Lines**: number of duplicated lines in the target language.
- **Blocks**: number of duplicated blocks of lines in the target language.
- **Density**: a measurement of the density of code duplication (i.e., number of duplicated lines / overall LOC).
- **Reliability and security**: measurements of reliability and security of the source code.
- **Bugs**: the number of bugs, as a measure of software reliability, and an estimated amount of hours for remediation.
- **Vulnerabilities (Vulner)**: the number of known vulnerabilities found, as a measure of software security, and an estimated amount of hours for remediation.
These metrics are not completely independent of each other. For example, size and complexity metrics are a clear indicator of software maintainability [16], while code duplication is a kind of code smell known as dispensable code, meaning a portion of unnecessary code that, if properly removed, would make the code cleaner, more efficient or easier to understand. Code dispensability is related to technical debt. The reason to report such metrics separately is to check the reliability and maintainability of the different software solutions with respect to the different causes that might be improved.
The more complex software frameworks are, the more functions they implement. Consequently, the entire source code of software frameworks should not be analyzed. The source code analysis should only cover the software modules of each framework concerned with the wrapping and linked data conversion functions that are common to the LDAA. Some frameworks or software tools are small and only perform such functions. Therefore, the source code of larger frameworks has been inspected in detail to filter out modules that implement non-comparable functions. The excluded modules have been those implementing certain functions of the data access, integration and storage layer (e.g., Sesame SPARQL implementations and Silk interlinking libraries, among others) as well as tool-specific functions that are not related with the rest of tools (e.g., Stanbol’s semantic enrichment of contents). The list of modules that have been included in the analysis of each tool can be examined in the appendix.
5.2 Results
As shown in Tables 2 and 3, the EasyData implementation considerably reduces CC and TD values. Since such measurements are dependent on the program size, it is more accurate to observe the TD ratio and CC density to compare different solutions. In this vein, the TD ratio and CCD are lower for EasyData than for other solutions. The reduced TD has an influence on the maintainability of the solution.
On the other hand, all solutions present a smaller code duplication density than EasyData (see code duplication metrics in Table 3). Some frameworks, such as Hydra and Triplify, also present a better reliability and security remediation cost, measured as remediation hours required to fix bugs and vulnerabilities. That indicates room for improvement in the EasyData implementation. Yet the number of vulnerabilities (see Table 4) is lower for EasyData and HydraBundle, mainly because they make a less intensive use of existing libraries and components that might add security issues.
With respect to size and complexity (see Table 2), HydraBundle, EasyData and Triplify have less complex implementations than the other frameworks. Tools like D2Rq and Datalift add extra complexity, because
<table>
<thead>
<tr>
<th>Tool</th>
<th>LOC</th>
<th>#statements</th>
<th>#functions</th>
<th>CC</th>
<th>CCD</th>
</tr>
</thead>
<tbody>
<tr>
<td>D2Rq</td>
<td>14,108</td>
<td>6,473</td>
<td>1,516</td>
<td>3,239</td>
<td>0.50</td>
</tr>
<tr>
<td>Stanbol</td>
<td>4,701</td>
<td>1,887</td>
<td>352</td>
<td>724</td>
<td>0.38</td>
</tr>
<tr>
<td>HydraBundle</td>
<td>2,354</td>
<td>1,098</td>
<td>187</td>
<td>476</td>
<td>0.43</td>
</tr>
<tr>
<td>Triplify</td>
<td>1,352</td>
<td>818</td>
<td>79</td>
<td>398</td>
<td>0.49</td>
</tr>
<tr>
<td>Datalift</td>
<td>16,037</td>
<td>7,009</td>
<td>1,349</td>
<td>3,043</td>
<td>0.43</td>
</tr>
<tr>
<td>EasyData</td>
<td>3,773</td>
<td>2,195</td>
<td>133</td>
<td>478</td>
<td>0.22</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Tool</th>
<th>#code smells</th>
<th>hours</th>
<th>ratio</th>
<th>#lines</th>
<th>#blocks</th>
<th>density</th>
</tr>
</thead>
<tbody>
<tr>
<td>D2Rq</td>
<td>805</td>
<td>84.5</td>
<td>1.25%</td>
<td>555</td>
<td>31</td>
<td>3.93%</td>
</tr>
<tr>
<td>Stanbol</td>
<td>0</td>
<td>44.9</td>
<td>1.99%</td>
<td>229</td>
<td>16</td>
<td>4.87%</td>
</tr>
<tr>
<td>HydraBundle</td>
<td>107</td>
<td>15.2</td>
<td>1.35%</td>
<td>369</td>
<td>4</td>
<td>15.68%</td>
</tr>
<tr>
<td>Triplify</td>
<td>166</td>
<td>18.5</td>
<td>2.85%</td>
<td>0</td>
<td>0</td>
<td>0.00%</td>
</tr>
<tr>
<td>Datalift</td>
<td>1,028</td>
<td>112.3</td>
<td>1.46%</td>
<td>2382</td>
<td>53</td>
<td>14.85%</td>
</tr>
<tr>
<td>EasyData</td>
<td>21</td>
<td>9.1</td>
<td>0.50%</td>
<td>687</td>
<td>79</td>
<td>18.21%</td>
</tr>
</tbody>
</table>
Table 4 Reliability and security of EasyData compared to other LD frameworks
<table>
<thead>
<tr>
<th>Tool</th>
<th>#bugs</th>
<th>hours</th>
<th>#vulner</th>
<th>hours</th>
<th>Remediation effort (h)</th>
</tr>
</thead>
<tbody>
<tr>
<td>D2Rq</td>
<td>19</td>
<td>205</td>
<td>17</td>
<td>240</td>
<td>445</td>
</tr>
<tr>
<td>Stanbol</td>
<td>28</td>
<td>415</td>
<td>22</td>
<td>260</td>
<td>675</td>
</tr>
<tr>
<td>HydraBundle</td>
<td>14</td>
<td>280</td>
<td>0</td>
<td>0</td>
<td>280</td>
</tr>
<tr>
<td>Triplify</td>
<td>5</td>
<td>100</td>
<td>1</td>
<td>30</td>
<td>130</td>
</tr>
<tr>
<td>Datalift</td>
<td>51</td>
<td>430</td>
<td>96</td>
<td>1,175</td>
<td>1,605</td>
</tr>
<tr>
<td>EasyData</td>
<td>39</td>
<td>335</td>
<td>0</td>
<td>0</td>
<td>335</td>
</tr>
</tbody>
</table>
they make use of a large number of libraries that have to be properly integrated and managed in the source code. This criterion is less relevant for EasyData: since the source code of the resulting extended LD application is automatically generated by the framework, the need to manage code complexity issues, which have to do with the programmers’ difficulty in maintaining the code, is reduced.
5.3 Discussion
The EasyData LDAA extension strategy can be exploited to fulfill non-functional features that might be defined for an existing web application. Its aim is not to define a new technique for linking heterogeneous linked data schemata and LD datasets (i.e. interlinking). One reason is that including an interlinking feature in EasyData might constrain the evolution of generated RDF models, whilst the interlinking function can be carried out through readily available tools, as explained elsewhere [54].
As opposed to the black-box wrapping approaches for adding linked data or converting an application’s output to linked data, the white-box extension approach of EasyData makes it possible to modify the application components that need to change. When doing that, diverse non-functional requirements can be readily implemented. Thus, this white-box strategy is essential to fulfill security and privacy preservation requirements. Data records that may constitute a publicly relevant dataset are never disclosed from the underlying application implementation without first being secured and privacy-preserved, with considerable savings in complexity and reliability.
The EasyData reengineering method combines the LDAA data wrapping and LDAA API extension in an integrated approach, which exposes the application data model and generates a new LD API, also providing a hook
to implement diverse non-functional requirements. In its current implementation, EasyData can provide a unified security access control layer, similar to other LD frameworks, in addition to privacy preservation measures, which are not as common. Thus, external agents can be properly authorized to browse and access an application’s resources under a set of privacy restrictions.
Although the privacy preservation rules defined in EasyData are focused on the \( k \)-anonymity model, these can be extended to other privacy models, such as probabilistic \( k \)-anonymity [47] or \( \varepsilon \)-differential privacy [11], without changing the core of the application. It would only be necessary to adjust the current set of sanitization options to a new set of PPDP methods that make it possible to achieve the requirements of a new privacy model. For instance, in an attempt to achieve probabilistic \( k \)-anonymity, the current sanitization options should be extended to generalization, microaggregation and rank swapping [42]. However, to achieve differentially private datasets, the sanitization option should be replaced by noise addition [41]. On the other hand, to provide protection against attribute disclosure (besides identity disclosure), and thus, offer a stronger privacy guarantee, EasyData could be adapted to combine \( k \)-anonymity with other privacy models, such as, \( l \)-diversity [31] or \( t \)-closeness [30].
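To make the sanitization options named above concrete, here is a minimal univariate microaggregation sketch. It is our own illustration, not the EasyData implementation: sort the values, split them into groups of \(k\), and replace each value by its group mean.

```ruby
# Minimal univariate microaggregation sketch (ours, for illustration):
# sort the values, split into groups of k, replace each value by the group
# mean. For simplicity this assumes the number of values is a multiple of k;
# real methods fold any remainder into the last group.
def microaggregate(values, k)
  values.sort.each_slice(k).flat_map do |group|
    mean = group.sum.to_f / group.size
    [mean] * group.size
  end
end

aggregated = microaggregate([9, 1, 10, 2], 2)
```

Within each group the individual values become indistinguishable, which is the property the \(k\)-anonymity model exploits.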
5.4 Threats to Validity and Limitations
A major concern of the EasyData white-box approach is that the described LDAA extension strategy is deeply integrated with the internal data model and logic of the legacy web application. This tight coupling eliminates the separation of concerns into a distinct linked data layer and implies that changes to the inner workings of the web application may affect the EasyData plugin implementation. In practical terms, this can make maintenance more difficult if the original web application source code is not under the control of the LDAA extension developer. For instance, in the Redmine example it is not straightforward to migrate to new Redmine versions without breaking the EasyData plugin. Black-box approaches, on the other hand, would only require the web application to provide a stable API, though this is not always easy. In this vein, the Hydra approach improves the decoupling of linked data consumer and provider by means of a core vocabulary that can be used to describe and document generic web APIs. EasyData should adopt a similar mechanism to become more general.
Compared to other solutions such as Datalift, EasyData does not provide a powerful interlinking technique to map heterogeneous metadata from different web sources. The links to external vocabularies and resources must be discovered by means of an external interlinking tool, and then used to modify the RDF model generated by EasyData. This flexible approach makes it possible to evolve the interlinking result without parsing the model again, but we must rely on an interlinking tool to ensure completeness of the RDF model.
6 Conclusion
The EasyData approach presented in this paper provides a flexible way to implement security and privacy properties in a legacy web application using linked data technologies. The LDAA extension approach can be applied at diverse layers of the architectural components of a web application. In this paper, we have described its application to the controller layer of a regular MVC-based architecture. The EasyData privacy by design procedure is constrained to MVC-based web application architectures and requires the availability of the source code. The LD model of a legacy web application can be disclosed and published with privacy preservation through any controller operation. The overall process is not based on adding middleware components, wrappers or adaptors, which can reduce the reliability and maintainability while increasing the complexity of the overall software architecture. A number of internal configurations can also make it possible to prepare for interlinking and alignment of the legacy application data model with external RDF sources. Configuring the application with external models and schemata beyond the legacy application data model is highly recommended to interlink heterogeneous entities in the Web of data. As future work, the EasyData implementation is planned to be augmented to connect the generated LDAA extensions with existing interlinking tools.
Overall, our results are subject to the scope of MVC-based application architectures, the use of linked data development frameworks and the implementation of non-functional confidentiality and privacy restrictions. Other web architectures, development technologies and intended non-functional properties should be further evaluated, though EasyData is a promising LDAA extension approach for other realms.
Appendix
Public Evaluation Data
The SonarQube analyses of all the tools and frameworks of this paper are publicly available on sonarcloud.io\(^7\). The analysis was executed with the SonarQube scanner\(^8\). To extract the relevant metrics for this paper, the sonarcloud.io Web API\(^9\) was used. A JSON output is obtained by means of simple scripts like that of Figure 7. The JSON output is then converted to CSV\(^10\) so that the analysis can be done on a regular spreadsheet.
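Since Figure 7 is not reproduced in this text, the sketch below shows the same kind of conversion: a response in the shape returned by the SonarQube measures Web API endpoint (api/measures/component) is flattened into one CSV row per metric. The sample payload is illustrative, not real analysis output.

```ruby
require "json"
require "csv"

# Sketch of the Figure 7 conversion: a measures Web API response (shape per
# the SonarQube api/measures/component endpoint) flattened to CSV. The
# sample payload below is illustrative, not real analysis output.
sample = <<~JSON
  {
    "component": {
      "key": "easydata",
      "measures": [
        {"metric": "ncloc", "value": "3773"},
        {"metric": "complexity", "value": "478"}
      ]
    }
  }
JSON

doc = JSON.parse(sample)
csv = CSV.generate do |out|
  out << %w[component metric value]
  doc["component"]["measures"].each do |m|
    out << [doc["component"]["key"], m["metric"], m["value"]]
  end
end
```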
All the software modules and packages that are included in the analysis for the more complex linked data frameworks are listed in Figure 8 (Stanbol).
---
\(^7\)https://sonarcloud.io/organizations/dodero-github/projects
\(^8\)https://docs.sonarqube.org/display/SCAN/Analyzing+Source+Code
\(^9\)https://sonarcloud.io/web_api/
\(^10\)https://konklone.io/json/
and Figure 9 (Datalift). As for the simpler tools, such as Triplify, HydraBundle and EasyData, all modules were included in the analysis. In the D2Rq case, all modules were included except src/de/fuberlin/wiwiss/d2rq/server.

Figure 9 Datalift modules included in the source code analysis
Acknowledgements
This work was supported by the Spanish Ministry of Economy, Industry and Competitiveness under grants with ref. TIN2017-85797-R (VISAIGLE project) and TIN2016-80250-R (Sec-MCloud project).
List of Abbreviations
API Application Programming Interface
CC Cyclomatic Complexity
CCD Cyclomatic Complexity Density
IS Information Science
LD Linked Data
LDAA Linked Data Application Architecture
LOC Lines Of Code
LOD Linked Open Data
MVC Model-View-Controller
ORM Object-Relational Mapping
PII Personally Identifiable Information
PbD Privacy by Design
PPDP Privacy-Preserving Data Publishing
QI Quasi-Identifier
RBAC Role-Based Access Control
SbD Security by Design
SPM Software Process Management
TD Technical Debt
References
Privacy-Preserving Reengineering of MVC Application Architectures Using LD
Biographies
Juan Manuel Dodero obtained the BSc and MSc degrees in computer science from the Polytechnic University of Madrid, and a PhD in computer science and engineering from the Carlos III University of Madrid. He has been a Research & Development Engineer in a number of ICT companies. He is currently a Full Professor with the Department of Informatics and Engineering of the University of Cádiz, Spain. His current research interests are related with creative computing, technology-enhanced learning and computational thinking.
Mercedes Rodriguez-Garcia received a BSc degree in computer science from the University of Cádiz, Spain, a MSc degree in ICT security from the Open University of Catalonia, Spain, and a PhD degree in computer science and mathematics of security from the Rovira i Virgili University, Tarragona, Spain. She is currently an Assistant Lecturer with the Department of Automation Engineering, Electronics and Computer Architecture of the University of Cádiz, Spain. Her research and teaching interests include data privacy, computer network security, and reverse engineering and secure architectures.
Iván Ruiz-Rube received his MSc degree in software engineering from the University of Seville and a PhD degree in computer science from the University of Cádiz. He has been a Development Engineer with Everis and Sadiel ICT consulting companies. He is currently an assistant lecturer with the University of Cádiz, Spain. His fields of research are software process improvement, linked open data and technology-enhanced learning.
Manuel Palomo-Duarte received the MSc degree in computer science from the University of Seville, Spain and the PhD degree from the University of Cádiz, Spain. Since 2005 he works as a lecturer in the University of Cádiz. He is the author of three book chapters, 20 papers published in indexed journals and more than 30 contributions to international academic conferences. His main research interests are learning technologies and collaborative web. He was a board member in Wikimedia Spain from 2012 to 2016.
Strategic Port Graph Rewriting: An Interactive Modelling and Analysis Framework
Maribel Fernandez, Hélène Kirchner, Bruno Pinaud
To cite this version:
Maribel Fernandez, Hélène Kirchner, Bruno Pinaud. Strategic Port Graph Rewriting: An Interactive Modelling and Analysis Framework. Dragan Bošnački; Stefan Edelkamp; Alberto Lluch Lafuente; Anton Wijs. 3rd Workshop on GRAPH Inspection and Traversal Engineering, Apr 2014, Grenoble, France. 159, pp.15–29, 2014, <10.4204/EPTCS.159.3>. <hal-00954546v3>
Strategic Port Graph Rewriting:
An Interactive Modelling and Analysis Framework
Maribel Fernández
King’s College London, Department of Informatics, Strand, London WC2R 2LS, UK
maribel.fernandez@kcl.ac.uk
Hélène Kirchner
Inria, Domaine de Voluceau, Rocquencourt BP 105, 78153 Le Chesnay Cedex, France
helene.kirchner@inria.fr
Bruno Pinaud
Bordeaux University, LaBRI CNRS UMR5800, 33405 Talence Cedex, France
bruno.pinaud@labri.fr
We present strategic portgraph rewriting as a basis for the implementation of visual modelling and analysis tools. The goal is to facilitate the specification, analysis and simulation of complex systems, using port graphs. A system is represented by an initial graph and a collection of graph rewriting rules, together with a user-defined strategy to control the application of rules. The strategy language includes constructs to deal with graph traversal and management of rewriting positions in the graph. We give a small-step operational semantics for the language, and describe its implementation in the graph transformation and visualisation tool PORGY.
Keywords: portgraph, graph rewriting, strategies, simulation, analysis, visual environment
1 Introduction
In this paper we present strategic portgraph rewriting as a basis for the design of PORGY – a visual, interactive environment for the specification, debugging, simulation and analysis of complex systems. PORGY has a graphical interface [23] and an executable specification language (see Fig. 1), where a system is modelled as a portgraph together with portgraph rewriting rules defining its dynamics (Sect. 2).
Reduction strategies define which (sub)expression(s) should be selected for evaluation and which rule(s) should be applied (see [19, 8] for general definitions). They are present in programming languages such as Clean [24], Curry [17], and Haskell [18] and can be explicitly defined to rewrite terms in languages such as ELAN [7], Stratego [31], Maude [21] or Tom [4]. They are also present in graph transformation tools such as PROGRES [29], AGG [12], Fujaba [22], GROOVE [28], GrGen [14] and GP [27]. PORGY’s strategy language draws inspiration from these previous works, but a distinctive feature is that it allows users to define strategies using not only operators to combine graph rewriting rules but also operators to define the location in the target graph where rules should, or should not, apply.
The main contribution of this paper is the definition of a strategic graph program (Sect. 3): it consists of an initial located graph (that is, a portgraph with two distinguished subgraphs $P$ and $Q$ specifying the position where rewriting should take place, and the subgraph where rewriting is banned, respectively), and a set of rewrite rules describing its dynamic behaviour, controlled by a strategy. We formalise the
*Partially supported by the French National Research Agency project EVIDEN (ANR 2010 JCJC 0201 01).
Figure 1: Overview of PORGY: (1) editing one state of the graph being rewritten; (2) editing a rule; (3) all available rewriting rules; (4) portion of the derivation tree, a complete trace of the computing history; (5) the strategy editor.
concept of strategic graph program, showing how located graphs generalise the notion of a term with a rewrite position, and provide a small-step operational semantics for strategic graph programs (Sect. 4).
Strategies are used to control PORGY’s rewrite engine: users can create graph rewriting derivations and specify graph traversals using the language primitives to select rewriting rules and the position where the rules apply. A rewriting position is a subgraph, which can be interactively selected (in a visual way), or can be specified using a focusing expression. Alternatively, rewrite positions could be encoded in the rewrite rules using markers or conditions [27]. We prefer to deal with positions directly, following Dijkstra’s separation of concerns principle [11].
PORGY and its strategy language were first presented in [1, 13]. Unlike those papers, the notion of portgraph considered in this paper includes attributes for nodes, ports and also edges, which are taken into account in the definition of portgraph morphism. In addition, the strategy language includes a sublanguage to deal with properties of graphs, which facilitates the specification of rewrite positions and banned subgraphs (to be protected during rewriting). The operational semantics of the language, including the non-deterministic sublanguage, is formally defined using a transition system.
2 Port Graph Rewriting
Several definitions of graph rewriting are available, using different kinds of graphs and rewriting rules (see, for instance, [10, 15, 5, 26, 6, 20]). In this paper we consider port graphs with attributes associated to nodes, ports and edges, generalising the notion of port graph introduced in [2].
Intuitively, a port graph is a graph where nodes have explicit connection points called ports; edges are attached to ports. Nodes, ports and edges are labelled and may have attributes. For instance, a port may be associated to a state (e.g., active/inactive or principal/auxiliary) and a node may have properties such as colour, shape, label, etc. Attributes may be used to define the behaviour of the modelled system and for visualisation purposes (as illustrated later).
Port Graph with Attributes. A labelled port graph with attributes is a tuple \( G = (V_G, lv_G, E_G, le_G) \) where:
- \( V_G \) is a finite set of nodes.
- \( lv_G \) is a function that returns, for each \( v \in V_G \) with \( n \) ports, a node label \( N \) (the node’s name), a set \( \{p_1, \ldots, p_n\} \) of port labels (each with its own set of attribute labels and values), and a set of attribute labels (each with a value). The node label determines the set of ports and attributes. Thus, we may write \( \text{Interface}(v) = \text{Interface}(N) = \{p_1, \ldots, p_n\} \).
- \( E_G \) is a finite set of edges; each edge has two attachment ports \( (v_1, p_1), (v_2, p_2) \), where \( v_i \in V_G, p_i \in \text{Interface}(v_i) \). Edges are undirected, so \( (v_1, p_1), (v_2, p_2) \) is an unordered pair, and two nodes may be connected by more than one edge on the same ports.
- \( le_G \) is a labelling function for edges, which returns for each \( e \in E_G \) an edge label, its attachment ports \( (v_1, p_1), (v_2, p_2) \) and its set of attribute labels, each with an associated value.
Variables may be used as labels for nodes, ports, attributes and values in rewrite rules.
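To make the tuple definition concrete, here is a small Python sketch of the data structure (the class and method names are ours and purely illustrative; PORGY's actual implementation sits on top of the Tulip graph library):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str                                  # node label N
    ports: dict = field(default_factory=dict)  # port label -> attribute dict
    attrs: dict = field(default_factory=dict)  # node attribute label -> value

@dataclass
class Edge:
    ends: tuple                                # ((v1, p1), (v2, p2)), unordered
    attrs: dict = field(default_factory=dict)  # edge attribute label -> value

class PortGraph:
    def __init__(self):
        self.nodes = {}   # node id -> Node
        self.edges = []   # a list: parallel edges on the same ports are allowed

    def add_node(self, vid, name, ports, **attrs):
        self.nodes[vid] = Node(name, {p: {} for p in ports}, attrs)

    def add_edge(self, v1, p1, v2, p2, **attrs):
        # edges are attached to ports, so both ports must exist
        assert p1 in self.nodes[v1].ports and p2 in self.nodes[v2].ports
        self.edges.append(Edge(((v1, p1), (v2, p2)), attrs))

    def interface(self, vid):
        return set(self.nodes[vid].ports)

# two "Cell" nodes connected through their next/prev ports
g = PortGraph()
g.add_node("a", "Cell", ["next", "prev"], state=False)
g.add_node("b", "Cell", ["next", "prev"], state=False)
g.add_edge("a", "next", "b", "prev")
```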
Rewriting is defined using a notion of graph morphism:
Port Graph Morphism. Let \( G \) and \( H \) be two port graphs, where \( G \) may contain variables but \( H \) does not. A port graph morphism \( f : G \rightarrow H \) maps nodes, ports, edges with their respective attributes and values from \( G \) to \( H \), such that all non-variable labels are preserved, the attachment of edges is preserved and the set of pairs of attributes and values for nodes, ports and edges are also preserved. If \( G \) contains variable labels, the morphism must instantiate the variables. Intuitively, the morphism identifies a subgraph of \( H \) that is equal to \( G \) except for variable occurrences.
Port Graph Rewrite Rule. Port graphs are transformed by applying port graph rewriting rules. A port graph rewrite rule \( L \Rightarrow R \) can itself be seen as a port graph, consisting of two port graphs \( L \) and \( R \) called the left- and right-hand side respectively, and one special node \( \Rightarrow \), called arrow node. The left-hand side of the rule, also called pattern, is used to identify subgraphs in a given graph, which are then replaced by the right-hand side of the rule. The arrow node describes the way the new subgraph should be linked to the remaining part of the graph, to avoid dangling edges during rewriting.
Derivation. Given a finite set \( \mathcal{R} \) of rules, a port graph \( G \) rewrites to \( G' \), denoted by \( G \rightarrow_{\mathcal{R}} G' \), if there is a rule \( r = L \Rightarrow R \) in \( \mathcal{R} \) and a morphism \( g \) from \( L \) to \( G \), such that \( G \rightarrow_{\mathcal{R}}^g G' \), that is, \( G' \) is obtained by replacing \( g(L) \) by \( g(R) \) in \( G \) and rewiring \( g(R) \) as specified by \( r \)’s arrow node. This induces a reflexive and transitive relation on port graphs, called the rewriting relation, denoted by \( \rightarrow_{\mathcal{R}} \). Each rule application is a rewriting step and a derivation, or computation, is a sequence of rewriting steps.
Derivation Tree. Given a port graph \( G \) and a set of port graph rewrite rules \( \mathcal{R} \), the derivation tree of \( G \), written \( DT(G, \mathcal{R}) \), is a labelled tree such that the root is labelled by the initial port graph \( G \), and its children are the roots of the derivation trees \( DT(G_i, \mathcal{R}) \) such that \( G \rightarrow_{\mathcal{R}} G_i \). The edges of the derivation tree are labelled with the rewrite rule and the morphism used in the corresponding rewrite step. We will use strategies to specify the rewrite derivations of interest.
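Abstracting from port graphs, the derivation tree can be unfolded from any successor function; the following sketch (our notation, not PORGY's) builds it to a bounded depth:

```python
# The derivation tree unfolds every possible rewriting step from every
# reachable graph. Given any successor function steps(g) -> iterable of
# (rule_name, g') pairs, build the tree as nested (graph, children) pairs.
def derivation_tree(g, steps, depth):
    if depth == 0:
        return (g, [])
    return (g, [(rule, derivation_tree(g2, steps, depth - 1))
                for rule, g2 in steps(g)])

# toy "rewriting" on integers: rule "dec" decrements while positive
steps = lambda n: [("dec", n - 1)] if n > 0 else []
tree = derivation_tree(2, steps, 5)
```

Since `steps(0)` is empty, the unfolding stops on its own before the depth bound, mirroring a terminal graph in the derivation tree.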
3 Strategic graph programs
Located graph. A located graph $G_P^Q$ consists of a port graph $G$ and two distinguished subgraphs $P$ and $Q$ of $G$, called respectively the position subgraph, or simply position, and the banned subgraph.
In a located graph $G_P^Q$, $P$ represents the subgraph of $G$ where rewriting steps may take place (i.e., $P$ is the focus of the rewriting) and $Q$ represents the subgraph of $G$ where rewriting steps are forbidden. We give a precise definition below; the intuition is that subgraphs of $G$ that overlap with $P$ may be rewritten, if they are outside $Q$. The subgraph $P$ generalises the notion of rewrite position in a term: if $G$ is the tree representation of a term $t$ then we recover the usual notion of rewrite position $p$ in $t$ by setting $P$ to be the node at position $p$ in the tree $G$, and $Q$ to be the part of the tree above $P$ (to force the rewriting step to apply at $p$, i.e., downwards from the node $P$).
When applying a port graph rewrite rule, not only the underlying graph $G$ but also the position and banned subgraphs may change. A located rewrite rule, defined below, specifies two disjoint subgraphs $M$ and $N$ of the right-hand side $R$ that are used to update the position and banned subgraphs, respectively. If $M$ (resp. $N$) is not specified, $R$ (resp. the empty graph $\emptyset$) is used as default. Below, we use the operators $\cup$, $\cap$, and $\setminus$ to denote union, intersection and complement of port graphs. These operators are defined in the natural way on port graphs considered as sets of nodes, ports and edges.
Located rewrite rule. A located rewrite rule is given by a port graph rewrite rule $L \Rightarrow R$, and optionally a subgraph $W$ of $L$ and two disjoint subgraphs $M$ and $N$ of $R$. It is denoted $L_W \Rightarrow R_M^N$. We write $G_P^Q \rightarrow^g_{L_W \Rightarrow R_M^N} G'^{\,Q'}_{P'}$ and say that the located graph $G_P^Q$ rewrites to $G'^{\,Q'}_{P'}$ using $L_W \Rightarrow R_M^N$ at position $P$ avoiding $Q$, if $G \rightarrow^g_{L \Rightarrow R} G'$ with a morphism $g$ such that $g(L) \cap P = g(W)$, or simply $g(L) \cap P \neq \emptyset$ if $W$ is not provided, and $g(L) \cap Q = \emptyset$. The new position subgraph $P'$ and banned subgraph $Q'$ in $G'$ are defined as $P' = (P \setminus g(L)) \cup g(M)$ and $Q' = Q \cup g(N)$; if $M$ (resp. $N$) is not provided then we take $M = R$ (resp. $N = \emptyset$) by default.
In general, for a given located rule $L_W \Rightarrow R_M^N$ and located graph $G_P^Q$, more than one morphism $g$, such that $g(L) \cap P = g(W)$ and $g(L) \cap Q = \emptyset$, may exist (i.e., several rewriting steps at $P$ avoiding $Q$ may be possible). Thus, the application of the rule at $P$ avoiding $Q$ produces a set of located graphs.
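The applicability check and the update equations $P' = (P \setminus g(L)) \cup g(M)$ and $Q' = Q \cup g(N)$ can be sketched over plain node sets (a deliberate simplification: real subgraphs also carry ports and edges; the function name is ours):

```python
def apply_located(P, Q, gL, gM, gN, gW=None):
    """Check applicability of a located rule at position P avoiding Q, and
    compute the updated position/banned subgraphs. Subgraphs are modelled as
    plain node sets; gL, gM, gN, gW are the images of L, M, N, W under the
    morphism g. Returns (P', Q') or None if the rule does not apply here."""
    if gW is not None:
        applicable = (gL & P) == gW      # redex meets P exactly at g(W)
    else:
        applicable = bool(gL & P)        # redex merely overlaps P
    applicable = applicable and not (gL & Q)   # redex must avoid Q entirely
    if not applicable:
        return None
    P2 = (P - gL) | gM                   # P' = (P \ g(L)) ∪ g(M)
    Q2 = Q | gN                          # Q' = Q ∪ g(N)
    return P2, Q2
```

For instance, with $P = \{1,2,3\}$, $Q = \{9\}$ and a redex image $g(L) = \{2,3\}$, the position moves to $\{1\} \cup g(M)$ while $g(N)$ is added to the ban.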
To control the application of rewriting rules, we introduce a strategy language whose syntax is shown in Table 1. Strategy expressions are generated by the grammar rules from the non-terminal $S$. A strategy expression combines applications of located rewrite rules, generated by the non-terminal $A$, and position updates, using the non-terminal $U$ with focusing expressions generated by $F$. The application constructs and some of the strategy constructs are strongly inspired by term rewriting languages such as ELAN [7], Stratego [31] and Tom [4]. Focusing operators are not present in term rewriting languages where the implicit assumption is that the rewrite position is defined by traversing the term from the root downwards.
The syntax presented here extends the one in [13] by including a language to define subgraphs of a given graph by selecting nodes that satisfy some simple properties (see Table 2).
The focusing constructs are a distinctive feature of our language. They are used to define positions for rewriting in a graph, or to define positions where rewriting is not allowed. They denote functions used in strategy expressions to change the positions $P$ and $Q$ in the current located graph (e.g., to specify graph traversals). We describe them briefly below.
- **CrtGraph**, CrtPos and CrtBan, applied to a located graph $G_P^Q$, return respectively the whole graph $G$, $P$ and $Q$.
- **AllNgb**, OneNgb and NextNgb denote functions that apply to pairs consisting of a located graph $G_P^Q$ and a subgraph $G'$ of $G$. If Pos is an expression denoting a subgraph $G'$ of the current graph
Let $L, R$ be port graphs; $M, N$ positions; $n \in \mathbb{N}$; $p_1, \ldots, p_n \in [0, 1]$; $\sum_{i=1}^{n} p_i = 1$
(Strategies) $S ::= A \mid U \mid S;S \mid \text{repeat}(S) \mid \text{while}(S)\text{do}(S)$
$\qquad\mid (S)\text{orelse}(S) \mid \text{if}(S)\text{then}(S)\text{else}(S) \mid \text{ppick}(S_1, p_1, \ldots, S_n, p_n)$
(Applications) $A ::= \text{Id} \mid \text{Fail} \mid \text{all}(T) \mid \text{one}(T)$
(Transformations) $T ::= L_W \Rightarrow R_M^N$
(Position Update) $U ::= \text{setPos}(F) \mid \text{setBan}(F) \mid \text{isEmpty}(F)$
(Focusing) $F ::= \text{CrtGraph} \mid \text{CrtPos} \mid \text{CrtBan} \mid \text{AllNgb}(F)$
$\qquad\mid \text{OneNgb}(F) \mid \text{NextNgb}(F) \mid \text{Property}(\rho, F)$
$\qquad\mid F \cup F \mid F \cap F \mid F \setminus F \mid \emptyset$
Table 1: Syntax of the strategy language.
Let \textit{attribute} be an attribute label; \textit{a} a valid value for the given attribute label;
\textit{function-name} the name of a built-in or user-defined function.
(Properties) $\rho ::= (\text{Elem}, \text{Expr}) \mid (\text{Function}, \text{function-name})$
$\text{Elem} ::= \text{Node} \mid \text{Edge} \mid \text{Port}$
$\text{Expr} ::= \text{Label} == a \mid \text{Label}\ {!=}\ a \mid \text{attribute Relop attribute} \mid \text{attribute Relop } a$
$\text{Relop} ::= {==} \mid {!=} \mid > \mid < \mid >= \mid <=$
Table 2: Syntax of the Property Language.
$G$, then AllNgb(Pos) is the subgraph of $G$ consisting of all immediate successors of the nodes in $G'$, where an immediate successor of a node $v$ is a node that has a port connected to a port of $v$. OneNgb(Pos) returns a subgraph of $G$ consisting of one randomly chosen node which is an immediate successor of a node in $G'$. NextNgb(Pos) computes all successors of nodes in $G'$ using for each node only the subset of its ports labelled “next” (so NextNgb(Pos) returns a subset of the nodes returned by AllNgb(Pos)).
- Property$(\rho, F)$ is used to select a subgraph of a given graph, satisfying a certain property, specified by $\rho$. It can be seen as a filtering construct: if the focusing expression $F$ generates a subgraph $G'$ then Property$(\rho, F)$ returns a subgraph containing only the nodes and edges from $G'$ that satisfy the decidable property $\rho$. It typically tests a property on nodes, ports, or edges, allowing us for instance to select the subgraph of nodes with active ports: Property($(\text{Port}, \text{Active} == \text{true}), F$). It is also possible to specify a function to be used to compute the subgraph: Property($(\text{Function}, \text{Root}), \text{CrtGraph}$) uses the built-in (or user-defined)
function Root to compute a specific subgraph from the current graph.
- ∪, ∩ and \ are union, intersection and complement of port graphs which may be used to combine multiple Property operators; ∅ denotes the empty graph.
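Reading subgraphs as node sets again, the focusing operators AllNgb, NextNgb and Property admit a direct set-comprehension reading (the helper names and adjacency encoding are ours, not PORGY's API):

```python
# Adjacency with port labels: edges[(v, p)] is the set of neighbour nodes
# reachable through port p of node v.

def all_ngb(edges, pos):
    """All immediate successors of the nodes in pos, through any port."""
    return {w for (v, p), ws in edges.items() if v in pos for w in ws}

def next_ngb(edges, pos):
    """Immediate successors reached only via ports labelled "next"."""
    return {w for (v, p), ws in edges.items()
            if v in pos and p == "next" for w in ws}

def node_property(attrs, pred, pos):
    """Filter pos down to the nodes whose attribute record satisfies pred."""
    return {v for v in pos if pred(attrs[v])}
```

Union, intersection and complement of the resulting sets then model the $\cup$, $\cap$ and $\setminus$ operators directly.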
Other operators can be derived from the language constructs. A useful example is the not construct:
- not(S) ≜ if(S) then(Fail) else(Id). It fails if S succeeds and succeeds if S fails.
**Strategic graph program** A strategic graph program consists of a finite set of located rewrite rules \( \mathcal{R} \), a strategy expression \( S \) (built with \( \mathcal{R} \) using the grammar in Table 1) and a located graph \( G_P^Q \). We denote it \([S, G_P^Q]\), leaving \( \mathcal{R} \) implicit when it is clear from the context.
### 4 Semantics of strategic graph programs
Intuitively, a strategic graph program consists of an initial port graph, together with a set of rules that will be used to reduce it, following the given strategy. Formally, the semantics of a strategic graph program \([S, G_P^Q]\) is specified using a transition system (that is, a set of configurations with a binary relation on configurations), defining a small-step operational semantics in the style of [25].
**Definition** A configuration is a multiset \( \{O_1, \ldots, O_n\} \) where each \( O_i \) is a strategic graph program.
Given a strategic graph program \([S, G_P^Q]\), we will define sequences of transitions according to the strategy \( S \), starting from the initial configuration \(\{[S, G_P^Q]\}\). A configuration is terminal if no transitions can be performed.
We will prove that all terminal configurations in our transition system consist of program values (or simply values, if there is no ambiguity), denoted by \( V \), of the form \([\text{Id}, G_P^Q]\) or \([\text{Fail}, G_P^Q]\). In other words, there are no blocked programs: the transition system ensures that, for any configuration, either there are transitions to perform, or we have reached values.
Below we provide the transition rules for the core sublanguage, that is, the sublanguage that does not include the non-deterministic operators one(), ()orelse(), ppick(), repeat() and OneNgb(). The non-deterministic sublanguage is presented in Sect. A of the Appendix.
**Transitions** The transition relation \( \rightarrow \) is a binary relation on configurations, defined as follows:
\[
\{O_1, \ldots, O_k, V_1, \ldots, V_j\} \rightarrow \{O'_{11}, \ldots, O'_{1m_1}, \ldots, O'_{k1}, \ldots, O'_{km_k}, V_1, \ldots, V_j\}
\]
if \( O_i \rightarrow \{O'_{i1}, \ldots, O'_{im_i}\} \), for \( 1 \leq i \leq k \), where \( k \geq 1 \) and some of the \( O'_{ij} \) might be values.
The auxiliary relation \( \rightarrow \) is defined below using axioms and rules.
A configuration \( \{O_1, \ldots, O_k, V_1, \ldots, V_j\} \) is a multiset of graph programs: each element represents a node in the derivation tree generated by the initial graph program. The transition relation performs reductions in parallel at all the positions in the derivation tree where there is a reducible graph program.
**Definition** The transition relation \( \rightarrow \) on individual strategic graph programs is defined by induction.
There are no axioms/rules defining transitions for a program where the strategy is Id or Fail (these are terminal).
**Axioms for the operator all:**
\[
[\text{all}(L_W \Rightarrow R_M^N), G_P^Q] \rightarrow \{[\text{Id}, G_{1\,P_1}^{\;Q_1}], \ldots, [\text{Id}, G_{k\,P_k}^{\;Q_k}]\} \quad \text{if } LS_{L_W \Rightarrow R_M^N}(G_P^Q) = \{G_{1\,P_1}^{\;Q_1}, \ldots, G_{k\,P_k}^{\;Q_k}\} \neq \emptyset
\]
\[
[\text{all}(L_W \Rightarrow R_M^N), G_P^Q] \rightarrow \{[\text{Fail}, G_P^Q]\} \quad \text{if } LS_{L_W \Rightarrow R_M^N}(G_P^Q) = \emptyset
\]
where $LS_{L_W \Rightarrow R_M^N}(G_P^Q)$, the set of legal reducts of $G_P^Q$ for $L_W \Rightarrow R_M^N$, or legal set for short, contains all the located graphs $G_{i\,P_i}^{\;Q_i}$ ($1 \leq i \leq k$) such that $G_P^Q \rightarrow^{g_i}_{L_W \Rightarrow R_M^N} G_{i\,P_i}^{\;Q_i}$ and $g_1, \ldots, g_k$ are pairwise different.
As the name of the operator indicates, all possible applications of the rule are considered in one step. The strategy fails if the rule is not applicable.
**Position Update and Focusing.** Next we give the semantics of the commands that are used to specify and update positions via focusing constructs. The focusing expressions generated by the grammar for the non-terminal $F$ in Table 1 have a functional semantics (see below). In other words, an expression $F$ denotes a function that applies to the current located graph, and computes a subgraph of $G$. Since there is no ambiguity, the function denoted by the expression $F$ is also called $F$.
\[
\begin{align*}
[\text{setPos}(F), G_P^Q] & \rightarrow \{[\text{Id}, G_{P'}^{\,Q}]\} && \text{if } F(G_P^Q) = P' \\
[\text{setBan}(F), G_P^Q] & \rightarrow \{[\text{Id}, G_{P}^{\,Q'}]\} && \text{if } F(G_P^Q) = Q' \\
[\text{isEmpty}(F), G_P^Q] & \rightarrow \{[\text{Id}, G_P^Q]\} && \text{if } F(G_P^Q) = \emptyset \\
[\text{isEmpty}(F), G_P^Q] & \rightarrow \{[\text{Fail}, G_P^Q]\} && \text{if } F(G_P^Q) \neq \emptyset
\end{align*}
\]
\begin{align*}
\text{CrtGraph}(G_P^Q) &= G & \text{CrtPos}(G_P^Q) &= P & \text{CrtBan}(G_P^Q) &= Q \\
\text{AllNgb}(F)(G_P^Q) &= G' && \text{where } G' \text{ consists of all immediate successors of nodes in } F(G_P^Q) \\
\text{NextNgb}(F)(G_P^Q) &= G' && \text{where } G' \text{ consists of the immediate successors, via ports labelled ``next'', of nodes in } F(G_P^Q) \\
\text{Property}(\rho, F)(G_P^Q) &= G' && \text{where } G' \text{ consists of all nodes in } F(G_P^Q) \text{ satisfying } \rho \\
(F_1 \ \text{op} \ F_2)(G_P^Q) &= F_1(G_P^Q) \ \text{op} \ F_2(G_P^Q) && \text{where op is one of } \cup, \cap, \setminus
\end{align*}
Note that with the semantics given above for setPos() and setBan(), it is possible for $P$ and $Q$ to have a non-empty intersection. A rewrite rule can still apply if the redex overlaps $P$ but not $Q$.
**Sequence.** The semantics of sequential application, written $S_1; S_2$, is defined by two axioms and a rule:
\[
[\text{Id}; S_1, G_P^Q] \rightarrow \{[S_1, G_P^Q]\} \qquad [\text{Fail}; S_1, G_P^Q] \rightarrow \{[\text{Fail}, G_P^Q]\}
\]
\[
\frac{[S_1, G_P^Q] \rightarrow \{[S'_1, G_{1\,P_1}^{\;Q_1}], \ldots, [S'_k, G_{k\,P_k}^{\;Q_k}]\}}{[S_1; S_2, G_P^Q] \rightarrow \{[S'_1; S_2, G_{1\,P_1}^{\;Q_1}], \ldots, [S'_k; S_2, G_{k\,P_k}^{\;Q_k}]\}}
\]
The rule for sequences ensures that $S_1$ is applied first.
Conditional. The behaviour of the strategy \( \text{if}(S_1)\text{then}(S_2)\text{else}(S_3) \) depends on the result of the strategy \( S_1 \). If \( S_1 \) succeeds on (a copy of) the current located graph, then \( S_2 \) is applied to the current graph, otherwise \( S_3 \) is applied.
\[
\frac{\exists\, G', M \ \text{s.t.}\ \{[S_1, G_P^Q]\} \rightarrow^* \{[\text{Id}, G'], M\}}{[\text{if}(S_1)\text{then}(S_2)\text{else}(S_3), G_P^Q] \rightarrow \{[S_2, G_P^Q]\}}
\]
\[
\frac{\not\exists\, G', M \ \text{s.t.}\ \{[S_1, G_P^Q]\} \rightarrow^* \{[\text{Id}, G'], M\}}{[\text{if}(S_1)\text{then}(S_2)\text{else}(S_3), G_P^Q] \rightarrow \{[S_3, G_P^Q]\}}
\]
While loop. Iteration is defined using a conditional as follows:
\[
[\text{while}(S_1)\text{do}(S_2), G_P^Q] \rightarrow \{[\text{if}(S_1)\text{then}(S_2;\text{while}(S_1)\text{do}(S_2))\text{else}(\text{Id}), G_P^Q]\}
\]
Note that \( S_1 \) used as a condition in the two constructs above may produce some successes but also some failure results. To ensure a unique result, the strategy \( S_1 \) should terminate and be deterministic; the class \( \text{Cond} \) of strategies generated by the following grammar satisfies these conditions:
\[
\text{Cond} ::= \text{Cond}; \text{Cond} | \text{Id} | \text{Fail} | \text{all}(T) | \text{isEmpty}(F) | \text{not}(\text{Cond})
\]
where \( F \) should also be deterministic:
\[
F ::= \text{CrtGraph} \mid \text{CrtPos} \mid \text{CrtBan} \mid \text{AllNgb}(F) \mid \text{NextNgb}(F) \mid \text{Property}(\rho,F) \mid F \cup F \mid F \cap F \mid F \setminus F \mid \emptyset
\]
However, using non-deterministic constructs in the condition is not necessarily unsafe: if \( R \) is a located rule, we could, for instance, write \( \text{if}(\text{one}(R))\text{then}(S_2)\text{else}(S_3) \) to perform either \( S_2 \) or \( S_3 \), depending on whether \( R \) is applicable at the current position or not.
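The deterministic control skeleton (Id, Fail, sequence, conditional, while) can be prototyped as a tiny evaluator in which a primitive strategy is any function from states to a tagged result; this is our illustrative reading of the rules above, not PORGY's engine:

```python
def run(strategy, state):
    """Evaluate a strategy term to ("Id", state') or ("Fail", state').
    Strategies are tuples: ("Id",), ("Fail",), ("prim", f),
    ("seq", S1, S2), ("if", S1, S2, S3), ("while", S1, S2)."""
    kind = strategy[0]
    if kind == "Id":
        return ("Id", state)
    if kind == "Fail":
        return ("Fail", state)
    if kind == "prim":                 # a rule application, abstracted away
        return strategy[1](state)
    if kind == "seq":                  # Fail propagates; Id continues with S2
        tag, s = run(strategy[1], state)
        return (tag, s) if tag == "Fail" else run(strategy[2], s)
    if kind == "if":                   # the condition runs on a copy of the state
        tag, _ = run(strategy[1], state)
        return run(strategy[2] if tag == "Id" else strategy[3], state)
    if kind == "while":                # while(S1)do(S2) unfolds to a conditional
        return run(("if", strategy[1],
                    ("seq", strategy[2], strategy), ("Id",)), state)
    raise ValueError(f"unknown strategy: {kind}")

# toy instance: decrement a counter while it is positive
positive = ("prim", lambda n: ("Id", n) if n > 0 else ("Fail", n))
dec = ("prim", lambda n: ("Id", n - 1))
```

Note that `run(("while", positive, dec), 3)` terminates with `("Id", 0)`: the unfolding of while into if mirrors the transition rule above exactly.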
5 Examples
Using focusing (specifically the Property construct), we can create concise strategies that perform traversals. In this way, we can for instance switch between outermost and innermost term rewriting (on trees). This is standard in term-based languages such as ELAN or Stratego; here we can also define traversals in graphs that are not trees. More examples can be found in [1, 23, 13].
The following strategy allows us to check if a graph is connected using a standard connectivity test. Assuming that all nodes of the initial graph have the Boolean attribute state set to false, we just need one rewriting rule, which simply sets state to true on a node. We start with the strategy pick-one-node to randomly select a node \( n \) as a starting point. Then, the rule is applied to all neighbours of \( n \). When the rule cannot be applied any longer, the position subgraph is set to all neighbours of the previously used nodes which still have state set to false (visit-neighbours-at-any-distance). The strategy continues until the position subgraph is empty. If the rule can still be applied somewhere in the graph, there is a failure (check-all-nodes-visited). Note the use of attributes and focusing constructs to traverse the graph. Below
\footnote{Working examples can be downloaded from \url{http://tulip.labri.fr/TulipDrupal/?q=porgy}}
the strategy $R$ is an abbreviation for one($R$).
```plaintext
pick-one-node: setPos(CrtGraph);
               one(R);
               setPos(Property((Node,state == true),CrtGraph));
visit-neighbours-at-any-distance:
               setPos(AllNgb(CrtPos));
               while(not(isEmpty(CrtPos)))do(
                   if(R)then(R)else(
                       setPos(AllNgb(CrtPos)\Property((Node,state == true),CrtGraph))));
check-all-nodes-visited:
               setPos(CrtGraph);
               not(R)
```
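Re-read in plain Python, the three phases of the strategy correspond to the usual frontier-based connectivity test (an illustrative analogue, not a translation of the PORGY engine):

```python
def connected(nodes, adj):
    """Connectivity test mirroring the strategy: mark one start node, then
    repeatedly mark the unmarked neighbours of the current frontier; the
    graph is connected iff no node is left unmarked. adj maps each node to
    its (undirected) neighbours."""
    if not nodes:
        return True
    start = sorted(nodes)[0]                 # pick-one-node
    state = {start}                          # nodes with state == true
    frontier = {start}
    while frontier:                          # while(not(isEmpty(CrtPos)))
        # visit-neighbours-at-any-distance: unmarked neighbours of frontier
        frontier = {w for v in frontier for w in adj.get(v, ())} - state
        state |= frontier                    # apply the marking rule to them
    return state == set(nodes)               # check-all-nodes-visited
```

The set difference `- state` plays the role of the focusing expression AllNgb(CrtPos)\Property((Node,state == true),CrtGraph).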
6 Properties
A strategic graph program $[S, G^Q_P]$ is terminating if there is no infinite transition sequence from the initial configuration $\{[S, G^Q_P]\}$. It is weakly terminating if a configuration having at least one program value can be reached.
For the core part of the language (that is, excluding the constructs ppick(), ()orelse(), repeat(), one(), and OneNgb()), strategic graph programs have at most one terminal configuration (none if the program is non-terminating).
Result set. Given a strategic graph program $[S, G^Q_P]$, we can associate a set of results to each sequence of transitions out of the initial configuration $\{[S, G^Q_P]\}$: the result set associated to a sequence of transitions is the set of values in the configurations in the sequence. If the sequence of transitions out of the initial configuration $\{[S, G^Q_P]\}$ ends in a terminal configuration then the result set of the sequence is a program result. If a strategic graph program does not reach a terminal configuration (in case of non-termination) then the program result is undefined ($\bot$).
Note that there may be more than one derivation out of the initial configuration $\{[S, G^Q_P]\}$ ending in a terminal configuration. However, if we exclude the non-deterministic constructs, we can prove that each strategic graph program has at most one program result, which is a set of program values (Prop. 6.4).
Graph programs are not terminating in general; however, we can identify a terminating sublanguage (i.e. a sublanguage for which the transition relation is terminating). We can also characterise the terminal configurations. The next lemma is useful for the termination proof:
**Lemma 6.1** If $[S_1, G^Q_P]$ is terminating and $S_2$ is such that $[S_2, G^Q_P]$ is terminating for any $G^Q_P$, then $[S_1; S_2, G^Q_P]$ is terminating.
**Property 6.2 (Termination)** The sublanguage that excludes the while() and repeat() constructs is terminating.
**Property 6.3 (Progress: Characterisation of Terminal Configurations)** For every strategic graph program $[S, G^Q_P]$ that is not a value (i.e., $S \neq \text{Id}$ and $S \neq \text{Fail}$), there exists a configuration $C$ such that $\{[S, G^Q_P]\} \rightarrow C$.
Proof By induction on $S$. According to the definition of transitions in Sect. 4 and its probabilistic extension, for every strategic graph program $[S, G_P^Q]$ different from $[\text{Id}, G_P^Q]$ or $[\text{Fail}, G_P^Q]$, there is an axiom or rule that applies (it suffices to check all the cases in the grammar for $S$).
The language contains non-deterministic operators in each of its syntactic categories: OneNgb() for Position Update, one() for Applications and ppick(), ()orelse() and repeat() for Strategies. For the sublanguage that excludes them, we have the property:
Property 6.4 (Result Set) Each strategic graph program in the sublanguage that excludes OneNgb(), one(), ppick(), ()orelse() and repeat() has at most one program result.
Proof If we exclude those constructs, the transition system is deterministic, so there is at most one derivation out of any given graph program. Hence there is at most one program result.
With respect to the computational power of the language, Turing completeness is easy to establish. The proof is similar to that in [16].
Property 6.5 (Completeness) The set of all strategic graph programs $[S, G_P^Q]$ is Turing complete, i.e. can simulate any Turing machine.
7 Implementation
PORGY is implemented on top of the visualisation framework Tulip [3] as a set of Tulip plugins. The strategy language is one of these plugins. A version of Tulip bundled with PORGY can be downloaded from http://tulip.labri.fr/TulipDrupal/?q=porgy.
Our first challenge was to implement port graphs, because Tulip only supports plain nodes and edges in the graph-theoretic sense. We had to develop an abstraction layer on top of the Tulip graph library to work conveniently with port graphs.
When applying a rule $L \Rightarrow R$ to a graph $G$, the first operation is to compute a morphism between the left-hand side $L$ and $G$. This problem, known as graph-subgraph isomorphism, still receives considerable attention from the community. We have implemented Ullmann’s original algorithm [30] because its implementation is straightforward and it is used as a reference in many papers.
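To illustrate the problem being solved, here is a naive backtracking matcher for labelled graphs with edges encoded as two-element frozensets; it ignores ports and attributes and makes no claim to Ullmann-style pruning (the names are ours):

```python
def match(L_nodes, L_edges, G_nodes, G_edges):
    """L_nodes, G_nodes: dict node -> label; L_edges, G_edges: sets of
    2-element frozensets. Returns one label-preserving, edge-preserving
    injective mapping (dict L-node -> G-node) or None if none exists."""
    def edges_ok(m):
        # every pattern edge whose endpoints are both mapped must exist in G
        for e in L_edges:
            a, b = tuple(e)
            if a in m and b in m and frozenset({m[a], m[b]}) not in G_edges:
                return False
        return True

    def extend(m, remaining):
        if not remaining:
            return dict(m)
        v, rest = remaining[0], remaining[1:]
        for w in G_nodes:
            if w in m.values() or G_nodes[w] != L_nodes[v]:
                continue          # injectivity and label preservation
            m[v] = w
            if edges_ok(m):
                found = extend(m, rest)
                if found is not None:
                    return found
            del m[v]              # backtrack
        return None

    return extend({}, list(L_nodes))
```

Ullmann-style algorithms improve on this by maintaining and refining a candidate matrix to prune the search, but the backtracking skeleton is the same.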
The derivation tree is implemented with the help of metanodes (a node which represents a graph) and the quotient-graph functionalities of Tulip (a graph of metanodes). Each node of the derivation tree represents a graph $G$, except red nodes, which represent failure (Fail). Inside each node, the user sees an interactive drawing of the graph (see panel 4 of Fig. 1). See [23] for more details about the interactive features of PORGY and how we implemented them.
The strategy plugin is developed with the Spirit C++ library from Boost [2]. This plugin works as a compiler: its inputs are a strategy defined as a text string and the Tulip graph data structure; its output is a sequence of low-level Tulip graph operations. Boost (more precisely, its Random library) is also used to generate the random numbers needed for the probabilistic operators. For instance, we use a non-uniform generator for ppick() so as to choose a strategy following the given probabilities.
8 Conclusion
The strategy language defined in this paper is part of PORGY, an environment for visual modelling and analysis of complex systems through port graphs and port graph rewrite rules. It also offers a visual representation of rewriting traces as a derivation tree. The strategy language is used in particular to guide the construction of this derivation tree. The implementation uses the small-step operational semantics of the language. Some of these steps require a copy of the strategic graph program; this is done efficiently in PORGY thanks to the cloning functionalities of the underlying TULIP system [3]. Verification and debugging tools for avoiding conflicting rules or non-termination are planned for future work.
References
A Appendix – Probabilistic extension
To define the semantics of the non-deterministic and probabilistic constructs in the language we will generalise the transition relation. We denote by $\rightarrow_p$ a transition step with probability $p$. The relation $\rightarrow$ defined in Sect. 4 can be seen as a particular case where $p = 1$, that is, $\rightarrow$ corresponds to $\rightarrow_1$.
The transition relation on configurations also becomes probabilistic:
$$\{O_1, \ldots, O_k, V_1, \ldots, V_j\} \rightarrow_p \{O'_{1_1}, \ldots, O'_{m_1 1}, \ldots, O'_{1_k}, \ldots, O'_{m_k k}, V_1, \ldots, V_j\}$$
if $O_i \rightarrow_{p_i} \{O'_{1_i}, \ldots, O'_{m_i i}\}$ for $1 \leq i \leq k$ (where $k \geq 1$ and some of the $O'_{j_i}$ might be values) and $p = p_1 \times \ldots \times p_k$.
We write $M \rightarrow_p^* M'$ if there is a sequence of transitions $\rightarrow_p$ from configuration $M$ to $M'$, such that the product of probabilities is $p$. We are now ready to define the semantics of the remaining constructs in the strategy language.
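As a small illustration, the lifting of per-object steps to a configuration step (union of the produced objects, product of the probabilities) can be sketched in Python; the function name `lift` and the list encoding of configurations are our own choices for this sketch, not part of PORGY.

```python
def lift(object_steps, values):
    """Lift per-object probabilistic steps to one configuration step.

    object_steps: list of (results_i, p_i) pairs, one per object O_i,
    where results_i is the list produced by O_i ->_{p_i} {...}.
    values: the unchanged values V_1, ..., V_j of the configuration.
    Returns the new configuration and p = p_1 * ... * p_k.
    """
    new_configuration, p = [], 1.0
    for results, p_i in object_steps:
        new_configuration.extend(results)  # union of the produced objects
        p *= p_i                           # probabilities multiply
    return new_configuration + list(values), p
```

For example, two objects stepping with probabilities 0.5 and 0.25 yield a configuration step of probability 0.125.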
**Probabilistic Choice of Strategy:**
$$[\text{pick}(S_1, p_1, \ldots, S_n, p_n), G_P] \rightarrow_{p_j} \{[S_j, G_P]\} \quad \text{for } 1 \leq j \leq n$$
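This probabilistic choice can be sketched with the standard library's weighted sampling; the function name `pick` and the argument encoding are ours, and this is an illustrative analogue of the non-uniform generator used in the implementation, not its actual code.

```python
import random

def pick(strategies, probabilities, rng=random):
    """Choose strategy S_j with probability p_j.

    The probabilities are expected to sum to 1, as in the semantics."""
    assert abs(sum(probabilities) - 1.0) < 1e-9, "probabilities must sum to 1"
    return rng.choices(strategies, weights=probabilities, k=1)[0]
```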
**Non-deterministic Choice of Reduct:** The non-deterministic one() operator takes a rule as argument. It randomly selects only one reduct amongst the set of legal reducts $LS_{L_W \Rightarrow R_M^N}(G_P)$. Since all of them have the same probability of being selected, in the axioms below $p = 1/|LS_{L_W \Rightarrow R_M^N}(G_P)|$.
$$[\text{one}(L_W \Rightarrow R_M^N), G_P] \rightarrow_p \{[\text{Id}, G_P']\} \quad \text{if } G_P' \in LS_{L_W \Rightarrow R_M^N}(G_P)$$
$$[\text{one}(L_W \Rightarrow R_M^N), G_P] \rightarrow_1 \{[\text{Fail}, G_P]\} \quad \text{if } LS_{L_W \Rightarrow R_M^N}(G_P) = \emptyset$$
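Both axioms amount to a uniform choice among the legal reducts when there are any, and failure otherwise. A sketch, where the set of legal reducts is encoded as a list (an illustrative assumption):

```python
import random

def one(legal_reducts, rng=random):
    """Select one reduct uniformly from the legal reducts of G_P.

    Returns None (modelling [Fail, G_P] with probability 1) when the
    set of legal reducts is empty; otherwise each reduct is chosen
    with probability 1/len(legal_reducts)."""
    if not legal_reducts:
        return None
    return rng.choice(legal_reducts)
```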
**Priority choice:**
$$\frac{[S_1, G_P] \rightarrow_p^* \{[\text{Id}, G'], M\}}{[[S_1]\text{orelse}(S_2), G_P] \rightarrow_{p/n} \{[\text{Id}, G']\}} \qquad \frac{[S_1, G_P] \rightarrow_p^* \{[\text{Fail}, G'], M\}}{[[S_1]\text{orelse}(S_2), G_P] \rightarrow_p \{[S_2, G_P]\}}$$
Here, $S_1$ is applied to $G_P$: if with probability $p$ we reach a configuration with $n \geq 1$ successful leaves $[\text{Id}, G']$, then with probability $p/n$ there is a transition to one of the successful configurations $[\text{Id}, G']$. However, if with probability $p$ we reach a failure, then $S_2$ is applied to the initial graph with probability $p$ (we do not divide the probabilities in this case, since the transition does not depend on the choice of failure node). We assume that the implementation will take the shortest path $[S_1, G_P] \rightarrow_p^* \{[\text{Id}, G'], M\}$ that generates a success.
We chose to define $([S_1] \text{orelse}(S_2))$ as a primitive operator instead of encoding it as if($S_1$)then($S_1$)else($S_2$) since the language has non-deterministic operators: evaluating $S_1$ in the condition and afterwards in the “then” branch of the if-then-else could yield different values. The semantics given above ensures that if $S_1$ can succeed then it can be successfully applied.
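Under the simplifying assumption that a strategy is modelled as a function returning its list of successful result graphs (an empty list meaning failure), the priority choice can be sketched as follows; this encoding is ours, not PORGY's API.

```python
def orelse(s1, s2, graph):
    """[S1]orelse(S2): if S1 succeeds on the graph, keep its successes;
    otherwise apply S2 to the *initial* graph, as in the semantics."""
    successes = s1(graph)          # the [Id, G'] leaves of S1, if any
    if successes:                  # n >= 1 successful leaves
        return successes
    return s2(graph)               # S1 failed: S2 on the initial graph
```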
**Iteration of a given strategy:**
The construction repeat(S) iterates the strategy S.
$$\frac{[S, G_P] \rightarrow_p^* \{[\text{Id}, G'], M\}}{[\text{repeat}(S), G_P] \rightarrow_{p/n} \{[\text{repeat}(S), G']\}} \qquad \frac{[S, G_P] \rightarrow_p^* \{[\text{Fail}, G'], M\}}{[\text{repeat}(S), G_P] \rightarrow_p \{[\text{Id}, G_P]\}}$$
where again we assume that \( n \geq 1 \) is the number of successful leaves in the configuration \( \{[\text{Id}, G'], M\} \).
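Using the same kind of encoding as before, repeat can be sketched with a step function that returns a new graph on success and None on failure; the iteration cap is our own addition to guard against non-termination and is not part of the semantics.

```python
def repeat(step, graph, max_iters=10_000):
    """repeat(S): iterate S while it succeeds; when S fails, the whole
    construct succeeds with the graph produced so far ([Id, G])."""
    for _ in range(max_iters):
        result = step(graph)
        if result is None:    # S failed: repeat(S) yields [Id, G]
            return graph
        graph = result        # S succeeded on G': iterate repeat(S)
    return graph
```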
**Non-deterministic position update and focusing:**
The commands \( \text{setPos}(F) \), \( \text{setBan}(F) \) and \( \text{isEmpty}(F) \) are non-deterministic if the expression \( F \) is non-deterministic. The operator \( \text{OneNgb}(F) \) introduces non-determinism. The axioms are similar to the ones given in Section 4, but now the transitions are indexed by a probability.
VENUS: Vertex-Centric Streamlined Graph Computation on a Single PC
Jiefeng Cheng$^1$, Qin Liu$^2$, Zhenguo Li$^1$, Wei Fan$^1$, John C.S. Lui$^2$, Cheng He$^1$
$^1$Huawei Noah’s Ark Lab, Hong Kong
$^2$The Chinese University of Hong Kong
{cheng.jiefeng,li.zhenguo,david.fanwei,hecheng}@huawei.com
{qliu,cslui}@cse.cuhk.edu.hk
Abstract—Recent studies show that disk-based graph computation on just a single PC can be highly competitive with cluster-based computing systems on large-scale problems. Inspired by this remarkable progress, we develop VENUS, a disk-based graph computation system that is able to handle billion-scale problems efficiently on a commodity PC. VENUS adopts a novel computing architecture that features vertex-centric “streamlined” processing – the graph is sequentially loaded and the update functions are executed in parallel on the fly. VENUS deliberately avoids loading batch edge data by separating read-only structure data from mutable vertex data on disk. Furthermore, it minimizes random I/Os by caching vertex data in main memory. The streamlined processing is realized with an efficient sequential scan over the massive structure data and fast feeding of a large number of update functions. Extensive evaluation on large real-world and synthetic graphs has demonstrated the efficiency of VENUS. For example, VENUS takes just 8 minutes with a hard disk for PageRank on the Twitter graph with 1.5 billion edges. In contrast, Spark takes 8.1 minutes with 50 machines and 100 CPUs, and GraphChi takes 13 minutes using a fast SSD drive.
I. INTRODUCTION
We are living in a “big data” era due to the dramatic advances made in our ability to collect and generate data from various sensors, devices, and the Internet. Consider the Internet data. The web pages indexed by Google were around one million in 1998, but quickly reached one billion in 2000 and had already exceeded one trillion in 2008. Facebook also reached one billion users on October 4, 2012. It is of great interest to process, analyze, store, and understand these big datasets, in order to extract business value and derive new business models. However, researchers are facing significant challenges in processing these big datasets, due to the difficulties in managing them with our current methodologies and data mining software tools.
Graph computing over distributed or single multi-core platforms has emerged as a new framework for big data analytics, and it has drawn intensive interest recently [1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11]. Notable systems include Pregel [1], GraphLab [2], and GraphChi [3]. They use a vertex-centric computation model, in which the user just needs to provide a simple update function to the system, which is then executed for each vertex in parallel [1], [2], [3]. These developments substantially advance our ability to analyze large-scale graph data, which cannot be efficiently handled by previous parallel abstractions such as MapReduce [12] due to the sparse computation dependencies and iterative operations common in graph computation [3].
Distributed computing systems such as Spark [13], Pregel [1], PEGASUS [5], and GraphLab [2] can handle billion-scale graphs, but the cost of having and managing a large cluster is prohibitive for most users. On the other hand, disk-based single-machine graph computing systems such as GraphChi [3], X-Stream [14], and TurboGraph [15] have shown great potential in big graph analytics. For example, running the PageRank algorithm on a Twitter graph of 1.5 billion edges, Spark needs 8.1 minutes with 50 machines (100 CPUs) [16], while GraphChi spends only 13 minutes with just a single Mac Mini with 8GB RAM and a 256GB SSD drive; for the belief propagation algorithm on a Web graph of 6.7 billion edges, PEGASUS takes 22 minutes with 100 machines [5], while GraphChi uses 27 minutes on a single PC. These results suggest that disk-based graph computing on a single PC is not only highly competitive even compared to parallel processing over large clusters, but is also very affordable.
In general, graph computation is performed by iteratively executing a large number of update functions. The disk-based approach organizes the graph data into a number of shards on disk, so that each shard can fit in main memory. Each shard contains all the information needed for computing the updates of a number of vertices. One iteration executes all shards. A central issue is how to manage the computing states of all shards to guarantee the correctness of processing, which includes loading graph data from disk to main memory and synchronizing intermediate results to disk so that the latest updates are visible to subsequent computation. Therefore, there is a huge amount of data to be accessed per iteration, which can result in extensive I/Os and become a bottleneck of the disk-based approach. This generates great interest in new architectures for efficient disk-based graph computation.
The seminal work for disk-based graph computation is the GraphChi system [3]. GraphChi organizes a graph into a number of shards and processes each shard in turn. To execute a shard, the entire shard – its vertices and all of their incoming and outgoing edges – must be loaded into memory before processing. This constraint hinders the parallelism of computation and IO. In addition, after the execution, the updated vertex values need to be propagated to all the other shards on disk, which results in extensive IOs. The TurboGraph system [15] addresses these issues with a new computing model, pin-and-sliding (PAS), and by using expensive SSDs. PAS aims to process incoming, partial graph data without delay, but it is applicable only to certain embarrassingly parallel algorithms. The X-Stream system [14] explores a different,
edge-centric processing (ECP) model. However, it works by writing the partial, intermediate results to disk for subsequent processing, which doubles the sequential computation cost and data loading overhead. Since the ECP model uses very different APIs from previous vertex-centric systems, users need to re-implement many graph algorithms on ECP, which causes high development overhead. Moreover, certain important graph algorithms such as community detection [17] cannot be implemented on the ECP model (explained in Section IV-B).
In this work, we present VENUS, a disk-based graph computation system that is able to handle billion-scale problems very efficiently on a moderate PC. Our main contributions are summarized as follows.
A novel computing model. VENUS supports vertex-centric computation with streamlined processing. We propose a novel graph storage scheme which allows streaming in the graph data while performing computation. The streamlined processing can exploit the large sequential bandwidth of a disk as much as possible and parallelize computation and disk IO at the same time. In particular, the vertex values are cached in a buffer in order to minimize random IOs, which is much more desirable in disk-based graph computation where the cost of disk IO is often a bottleneck. Our system also significantly reduces the amount of data to be accessed, generates fewer shards than the existing scheme [3], and effectively utilizes large main memory with a provable performance guarantee.
Two new IO-friendly algorithms. We propose two IO-friendly algorithms to support efficient streamlined processing. In managing the computing states of all shards, the first algorithm stores the vertex values of each shard in separate files for fast retrieval during execution; all such files must be updated in a timely manner once the execution of each shard is finished. The second algorithm applies a merge-join to construct all vertex values on the fly. Our two algorithms adapt to memory scaling with less sharding overhead, and smoothly turn into an in-memory mode when the main memory can hold all vertex values.
A new analysis method. We analyze the performance of our vertex-centric streamlined processing computation model and other models by measuring the amount of data transferred between disk and main memory per iteration. We show that VENUS reads (writes) a significantly smaller amount of data from (to) disk than other existing models including GraphChi. Based on this measurement, we further find that the performance of VENUS improves gradually as the available memory increases, in contrast to the existing scheme [3].
Extensive experiments. We conducted extensive experiments using both large-scale real-world graphs and large-scale synthetic graphs to validate the performance of our approach. Our experiments look into the key performance factors of all disk-based systems, including computational time, the effectiveness of main memory utilization, the amount of data read and written, and the number of shards. We found that VENUS is up to 3x faster than GraphChi and X-Stream, two state-of-the-art disk-based systems.
The rest of the paper is organized as follows. Section II gives an overview of our VENUS system, which includes a disk-based architecture, graph organization and storage, and an external computing model. Section III presents algorithms to realize our processing pipeline. We extensively evaluate VENUS in Section IV. Section V reviews more related work. Section VI concludes the paper.
II. SYSTEM OVERVIEW
VENUS is based on a new disk-based graph computing architecture, which supports a novel graph computing model: vertex-centric streamlined processing (VSP), in which the graph is sequentially loaded and the update functions are executed in parallel on the fly. To support the VSP model, we propose a graph storage scheme and an external graph computing model that coordinates the graph computation with CPU, memory, and disk access. Working together, these components significantly reduce the amount of data to be accessed, generate a smaller number of shards than the existing scheme [3], and effectively utilize large main memory with a provable performance guarantee.
A. Architecture Overview
The input is modeled as a directed graph $G = (V, E)$, where $V$ is a set of vertices and $E$ is a set of edges. Like existing work [18], [19], the user can assign a mutable vertex value to each vertex and define an arbitrary read-only edge value on each edge. Note that this does not result in any loss of expressiveness, since mutable data associated with edge $(u, v)$ can be stored in vertex $u$. Let $(u, v)$ be a directed edge from node $u$ to node $v$. Node $u$ is called an in-neighbor of $v$, and $v$ an out-neighbor of $u$. $(u, v)$ is called an in-edge of $v$ and an out-edge of $u$, and $u$ and $v$ are called the source and destination of edge $(u, v)$ respectively.
Most graph tasks are iterative and vertex-centric in nature, and any update of a vertex value in each iteration usually involves only its in-neighbors’ values. Once a vertex is updated,
it will trigger the updates of its out-neighbors. This dynamic continues until convergence or certain conditions are met. The disk-based approach organizes the graph data into a number of shards on disk, so that each shard can fit in the main memory. Each shard contains all needed information for computing updates of a number of vertices. One iteration will execute all shards. Hence there is a huge amount of disk data to be accessed per iteration, which may result in extensive IOs and become a bottleneck of the disk-based approach. Therefore, a disk-based graph computing system needs to manage the storage and the use of memory and CPU in an intelligent way to minimize the amount of disk data to be accessed.
VENUS, its architecture depicted in Fig. 1, makes use of a novel management scheme of disk storage and the main memory, in order to support vertex-centric streamlined processing. VENUS decomposes each task into two stages: (1) offline preprocessing and (2) online computing. The offline preprocessing constructs the graph storage, while the online computing manages the interaction between CPU, memory, and disk.
B. Vertex-Centric Streamlined Processing
VENUS enables vertex-centric streamlined processing (VSP) on our storage system, which is crucial in fast loading of graph data and rapid parallel execution of update functions. As we will show later, it has a superior performance with much less data transfer overhead. Furthermore, it is more effective in main memory utilization, as compared with other schemes. We will elaborate on this in Section II-C. To support streamlined processing, we propose a new graph sharding method, a new graph storage scheme, and a novel external graph computing model. Let us now provide a brief overview of our sharding, storage, and external graph computing model.
Graph sharding. Suppose the graph is too big to fit in the main memory. Then how it is organized on disk will affect how it will be accessed and processed afterwards. VENUS splits the vertex set $V$ into $P$ disjoint intervals. Each interval defines a g-shard and a v-shard, as follows. The g-shard stores all the edges (and the associated attributes) with destinations in that interval. The v-shard contains all vertices in the g-shard, which includes the source and destination of each edge. Edges in each g-shard are ordered by destination, where the in-edges (and their associated read-only attributes) of a vertex are stored consecutively as a structure record. There are $|V|$ structure records in total for the whole graph. The g-shard and v-shard corresponding to the same vertex interval make a full shard. To illustrate the concepts of shard, g-shard, and v-shard, consider the graph as shown in Fig. 3. Suppose the vertices are divided into three intervals: $I_1 = [1, 3]$, $I_2 = [4, 6]$, and $I_3 = [7, 9]$. Then, the resulting shards, including g-shards and v-shards, are listed in Table I.
In practice, all g-shards are further concatenated to form the structure table, i.e., a stream of structure records (Fig. 2). Such a design allows executing vertex updates on the fly, and is crucial for VSP. Using this structure, we do not need to load the whole subgraph of vertices in each interval before execution as in GraphChi [3]. Observing that more shards usually incur more IOs, VENUS aims to generate as few shards as possible. To this end, a large interval is preferred provided that the associated v-shard can be loaded completely into the main memory; there is no size constraint on the g-shard. Once the vertex values of the vertices in a v-shard are loaded and held in the main memory, VENUS can readily execute the update functions of all vertices with only “one sequential scan” over the structure table of the g-shard. We will discuss how to load and update vertex values for vertices in each v-shard in Section III.
Graph storage. We propose a new graph storage scheme that aims to reduce the amount of data to be accessed. Recall that the graph data consists of two parts: the read-only structure records, called structure data, and the mutable vertex values, called value data. We observe that in one complete iteration, the entire structure data needs to be scanned once and only once, while the value data usually needs to be accessed multiple times, as a vertex value is involved in each update of its out-neighbors. This suggests organizing the structure data as consecutive pages, separated from the value data. As such, the access of the massive-volume structure data can be done highly efficiently with one sequential scan (sequential IOs). Specifically, we employ an operating system file, called the structure table, which is optimized for sequential scan, to store the structure data.
Note that the updates and repeated reads over the value data can result in extensive random IOs. To cope with this, VENUS deliberately avoids storing a significant amount of structure data into main memory as compared with the existing
system GraphChi [3], and instead caches value data in main memory. We store all value records in a disk table, which we call the value table. The value table is implemented as a number of consecutive disk pages, containing the \(|V|\) fixed-length value records, one per vertex. For simplicity of presentation, we assume all value records are arranged in ascending order of vertex ID. Our key observation is that, for most graph algorithms, the mutable value on a directed edge \((u, v)\) can be computed based on the mutable value of vertex \(u\) and the read-only attribute on edge \((u, v)\). Consequently, we can represent all mutable values of the out-edges of vertex \(u\) by a fixed-length mutable value on vertex \(u\).
**External computing model.** Given the above description of graph sharding and storage, we are ready to present our graph computing model, which processes the incoming stream of structure records on the fly. Each incoming structure record is passed for execution as the structure table is loaded sequentially. An execution manager is deployed to start new threads to execute new structure records in parallel, whenever possible. A structure record is removed from main memory immediately after its execution, so as to make room for the next record. On the other hand, the required vertex values of the active shard are obtained based on the v-shard, and are buffered in main memory throughout the execution of that shard. As a result, repeated accesses of the same vertex value can be served from the buffer, even across multiple shards. We illustrate the above computing process in Fig. 2.
We use the graph in Fig. 3 to illustrate and compare the processing pipelines of VENUS and GraphChi. The sharding structures of VENUS are shown in Table I, and those for GraphChi are in Table II where the number of shards is assumed to be 4 to reflect the fact that GraphChi usually uses more shards than VENUS. To begin, VENUS first loads v-shard of \(I_1\) into the main memory. Then we load the g-shard in a streaming fashion from disk. As soon as we are done loading the in-edges of vertex 1 (which include \((2, 1), (4, 1), (6, 1),\) and \((8, 1))\), we can perform the value update on vertex 1, and at the same time, we load the in-edges of vertices 2 and 3 in parallel. In contrast, to perform computation on the first interval, GraphChi needs to load all related edges (shaded edges in the table), which include all the in-edges and out-edges for the interval. This means that for processing the same interval, GraphChi requires more memory than VENUS. So under the same memory constraint, GraphChi needs more shards. More critically, because all in-edge and out-edges must be loaded before computation starts, GraphChi cannot parallelize IO operations and computations like VENUS.
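The essence of this pipeline – consume each structure record as soon as its in-edges are loaded, with the v-shard's values cached in memory – can be sketched sequentially in Python. The real system overlaps IO with parallel update execution; the names and record encoding below are ours.

```python
def process_shard(structure_records, values, update):
    """One pass over a g-shard's structure table (sequential sketch).

    structure_records: iterable of (vertex, in_neighbors) pairs, ordered
    by destination, as if streamed sequentially from disk.
    values: dict caching the vertex values of the whole v-shard.
    update: computes a vertex's new value from its in-neighbors' values.
    """
    for vertex, in_neighbors in structure_records:   # one sequential scan
        values[vertex] = update(vertex, [values[u] for u in in_neighbors])
    return values
```

For instance, with an update function that sums in-neighbor values, a record for vertex 1 with in-neighbors 2 and 3 replaces vertex 1's value by the sum of the cached values of 2 and 3.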
**C. Analysis**
We now compare our proposed VSP model with two popular single-PC graph computing models: the parallel sliding windows model (PSW) of GraphChi [3] and the edge-centric processing model (ECP) of X-Stream [14]. Specifically, we look at three evaluation criteria: 1) the amount of data transferred between disk and main memory per iteration; 2) the number of shards; and 3) adaptation to large memory.
There are strong reasons to develop our analysis based on the first criterion, i.e., the amount of data transfer: (i) it is fundamental and applicable to various types of storage systems, including magnetic disks and solid-state disks (SSDs), and various types of memory hierarchy, including on-board cache/RAM and RAM/disk; (ii) it can be used to derive IO complexity as in Section III-C, which can be based on accessing that amount of data with a block device; and (iii) it helps us to examine the other criteria, including the number of shards and large-memory adaptation. We summarize the results in Table IV, and show the details of our analysis below. Note that the second criterion is related closely to IO complexity, and the third criterion examines the utilization of memory.
For easy reference, we list the notation in Table III. For our VSP model, \(V\) is split into \(P\) disjoint intervals. Each interval \(I\) has a g-shard and a v-shard. A g-shard is defined as
\[
gs(I) = \{ (u,v) | v \in I \},
\]
and a v-shard is defined as
\[
vs(I) = \{ u | (u,v) \in gs(I) \lor (v,u) \in gs(I) \}.
\]
Note that \( vs(I) \) can be split into two disjoint sets \( I \) and \( S(I) \), where \( S(I) = \{ u \notin I | (u,v) \in gs(I) \} \). We have
\[
\sum_{I} |S(I)| \leq \sum_{I} |gs(I)| = m.
\]
Let \( \delta \) be a scalar such that
\[
\sum_{I} |S(I)| = \delta m, \text{ where } 0 \leq \delta \leq 1.
\]
It can be seen that
\[
\sum_{I} |vs(I)| = \sum_{I} |S(I)| + \sum_{I} |I| = \delta m + n.
\]
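These definitions can be checked on a toy graph: the sketch below computes $gs(I)$, $vs(I)$, $S(I)$, and $\delta$ from an edge list. The edge list and intervals in the usage example are hypothetical, not the graph of Fig. 3.

```python
def shard_stats(edges, intervals):
    """Compute g-shards, v-shards, and delta for a partition of V.

    edges: list of directed edges (u, v).
    intervals: list of vertex intervals partitioning V.
    Returns ({interval: (gs, sorted vs)}, delta), where
    delta = (sum over I of |S(I)|) / m.
    """
    m = len(edges)
    shards, boundary = {}, 0
    for interval in intervals:
        I = set(interval)
        gs = [(u, v) for (u, v) in edges if v in I]    # edges into I
        S = {u for (u, v) in gs if u not in I}         # external sources
        shards[tuple(interval)] = (gs, sorted(I | S))  # vs(I) = I u S(I)
        boundary += len(S)
    return shards, boundary / m
```

For the edges (2,1), (1,2), (3,4), (1,4) and intervals {1,2} and {3,4}, only vertex 1 is an external source (for the second interval), so delta = 1/4.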
Let \( C \) be the size of a vertex value record, and let \( D \) be the size of one edge field within a structure record. We use \( B \) to denote the size of a disk block that requires one unit of IO to access. According to [14], \( B \) equals 16MB for hard disk and 1MB for SSD.
**Data transfer.** For each iteration, VENUS loads all g-shards and v-shards from disk, which needs \( Dm \) and \( C(n + \delta m) \) data read in total. When the computation is done, VENUS writes v-shards back to disk which incurs \( Cn \) data write. Note that all g-shards are read-only.
Unlike VENUS, where each vertex can access the values of its neighbors through the v-shard, GraphChi accesses such values from the edges, so the data size of each edge is \((C + D)\). For each iteration, GraphChi processes one shard at a time. The processing of each shard is split into three steps: (1) loading a subgraph from disk; (2) updating the vertices and edges; (3) writing the updated values to disk. In steps 1 and 3, each vertex is loaded and written once, which incurs \( Cn \) data read and write. For the edge data, in the worst case each edge is accessed twice (once in each direction) in step 1, which incurs \( 2(C + D)m \) data read. If the computation updates edges in both directions in step 2, the size of the edge data written in step 3 is also \( 2(C + D)m \). So the data read and the data write are each \( Cn + 2(C + D)m \) in total.
In the disk-based engine of X-Stream, one iteration is divided into (1) a merged scatter/shuffle phase and (2) a gathering phase. In phase 1, X-Stream loads all vertex value data and edge data, and for each edge it writes an update to disk. Since updates are used to propagate values passed from neighbors, we suppose the size of an update is \( C \). So for phase 1, the size of read is \( Cn + Dm \) and the size of write is \( Cm \). In phase 2, X-Stream loads all updates and updates each vertex, so the size of data read is \( Cm \) and the size of write is \( Cn \). So for one full pass over the graph, the size of read is \( Cn + (C + D)m \) and the size of write is \( Cn + Cm \) in total.
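The per-iteration transfer sizes derived in this subsection can be evaluated mechanically. Below is a minimal sketch (ours; the parameter values are illustrative, not from the paper):

```python
# Per-iteration data transfer (bytes), following the analysis in the text.
# C: size of a vertex value record; D: size of one edge field.

def vsp_transfer(n, m, delta, C, D):
    read = D * m + C * (n + delta * m)   # g-shards + v-shards
    write = C * n                        # v-shards written back; g-shards are read-only
    return read, write

def graphchi_transfer(n, m, C, D):
    # Worst case: each edge accessed twice, in both the load and write-back steps.
    read = C * n + 2 * (C + D) * m
    write = C * n + 2 * (C + D) * m
    return read, write

def xstream_transfer(n, m, C, D):
    read = C * n + (C + D) * m           # scatter reads + gather reads
    write = C * n + C * m                # vertex values + one update per edge
    return read, write

# Illustrative parameters (ours): 1M vertices, 10M edges, delta = 0.5, C = D = 8 bytes.
n, m, delta, C, D = 1_000_000, 10_000_000, 0.5, 8, 8
print(vsp_transfer(n, m, delta, C, D))
print(graphchi_transfer(n, m, C, D))
print(xstream_transfer(n, m, C, D))
```

With these numbers the VSP read volume is well below GraphChi's worst case, matching the comparison summarized in Table IV.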
**Number of shards.** For interval \( I \), VENUS only loads the v-shard \( vs(I) \) into memory and the g-shard \( gs(I) \) is loaded in a streaming fashion. So the number of shards is determined by the total size of v-shards and we have \( P = \frac{C(n + \delta m)}{M} \). In contrast, GraphChi loads both vertex value data and edge data for each interval, so the number of shards \( P \) in GraphChi is \( \frac{Cn + 2(C + D)m}{M} \). In X-Stream, edges data are also loaded in a streaming fashion, so the number of intervals is \( P = \frac{Cn}{M} \).
We can see that the number of shards constructed in VENUS is always smaller than that in GraphChi. In Section III, we will show that the smaller the number of shards, the lower the IO complexity.
**Adaptation to large memory.** As analyzed above, for our VSP model, the size of data that needs to be read in one iteration is \( C(n + \delta m) + Dm \). So one way to improve performance is to decrease \( \delta \). Here we show that \( \delta \) does decrease as the size of available memory increases, which implies that VENUS can exploit the main memory effectively. Suppose the memory size is \( M \), and the vertex set \( V \) is split into \( P \) intervals \( I_1, I_2, \ldots, I_P \), where \( |vs(I_i)| \leq M \) for \( i = 1, \ldots, P \). Then, by definition, \( \delta m = \sum_{i=1}^{P} |S(I_i)| \). Now, consider a larger memory size \( M' \) with \( M' \geq M + |vs(I_2)| \). Under the memory size \( M' \), we can merge intervals \( I_1 \) and \( I_2 \) into a new interval \( I_1' \), because \( |vs(I_1')| \leq |vs(I_1)| + |vs(I_2)| \leq M' \). Suppose \( \delta' m = |S(I_1')| + \sum_{i=3}^{P} |S(I_i)| \). By the definition of \( S(I) \), it can be shown that \( S(I_1') \subseteq S(I_1) \cup S(I_2) \), and thus \( |S(I_1')| \leq |S(I_1)| + |S(I_2)| \). Therefore we have \( \delta' \leq \delta \), which means that as \( M \) increases, \( \delta \) becomes smaller. When \( M \geq Cn \), we have \( P = 1 \). In such a single-shard case, the data size of read reaches the lower bound \( Cn + Dm \).
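The merging argument can be checked on a toy graph: computing \( \delta \) for a 2-interval split and for the fully merged split shows that \( \delta \) never grows when intervals merge. A small self-contained sketch (ours):

```python
# Our own toy check that merging intervals cannot increase delta:
# S(I1 ∪ I2) ⊆ S(I1) ∪ S(I2), so sum |S(I)| shrinks (or stays) as P drops.

def delta_for(edges, intervals):
    m = len(edges)
    total = 0
    for I in intervals:
        I = set(I)
        # S(I): sources of in-edges of I that lie outside I.
        total += len({u for (u, v) in edges if v in I and u not in I})
    return total / m

edges = [(1, 2), (2, 3), (3, 1), (4, 1), (4, 3), (2, 4)]
d2 = delta_for(edges, [[1, 2], [3, 4]])   # small memory: P = 2
d1 = delta_for(edges, [[1, 2, 3, 4]])     # memory holds everything: P = 1
assert d1 <= d2
print(d2, d1)
```

Here \( \delta \) drops from 0.5 to 0 once a single interval covers all vertices, matching the \( Cn + Dm \) lower bound on read size.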
### III. Algorithm
In this section, we discuss the full embodiment of our vertex-centric streamlined processing model, describing the details of our graph storage design, the online computing state management, and the main memory usage. It consists of two IO-friendly algorithms with different flavors and IO complexities in implementing the processing of Section II. Note that the IO results here are consistent with the data transfer size results in Section II, because the results here are obtained with optimizations specialized for disk-based processing that transfer the same amount of data. Since the computation is always centered on an active shard, the online computing state mainly consists of the v-shard values that belong to the active shard.
Our first algorithm materializes all v-shard values in each shard, which supports fast retrieval during online computing. However, an in-time update on all such views is necessary once the execution of each shard is finished. We employ an efficient scheme to exploit the data locality in all materialized views. This scheme shares a similar spirit with the parallel sliding windows of [3], and its IO cost is quadratic in \( P \), the number of shards. To avoid the overhead of view maintenance at run time, our second algorithm applies a “merge-join” to construct all v-shard values on the fly, and updates the active shard only. The second algorithm has an IO complexity linear in \( P \). Finally, as the RAM becomes larger, both algorithms adapt to the memory scaling with less sharding overhead, and ultimately they work in the in-memory mode to seamlessly handle the case when the main memory can hold all vertex values.
#### A. Physical Design And The Basic Procedure
**The tables.** The value table is implemented as a number of consecutive disk pages, containing \( |V| \) fixed-length value records, one per vertex. For ease of presentation, we assume all value records are arranged in the ascending order of their vertex IDs in the table. For an arbitrary vertex \( v \), the disk page containing its value record can therefore be computed directly from its ID.
**The basic procedure.** In VENUS, there is a basic execution procedure, Procedure ExecuteVertex, which represents the unit task that is assigned to and executed by the multiple cores of the computer. Moreover, Procedure ExecuteVertex serves as a common routine upon which all our algorithms are built, the simplest one being the in-memory mode to be explained below.
Procedure ExecuteVertex takes a vertex \( v \in I \), the structure record \( R(v) \), the value buffer \( VB \) (call-by-reference), and the current interval \( I \) as its input. The value buffer \( VB \) maintains all latest vertex values of the v-shard \( vs(I) \) of interval \( I \). In VB, we use two data structures to store vertex values, i.e., a frame table and a map. Note that \( vs(I) \) can be split into two disjoint vertex sets \( I \) and \( S(I) \). The frame table maintains all pinned value table pages of the vertices within interval \( I \); the map is a dictionary of vertex values for all vertices within \( S(I) \). Therefore, VB supports the fast look-up of any vertex value of the current v-shard \( vs(I) \). Procedure ExecuteVertex assumes the map of VB already includes all vertex values for \( S(I) \).
**Algorithm 1: Execute One Iteration with Views**
1. let \( I \) be the first interval;
2. load \( view(I) \) into the map of VB;
3. foreach \( R(v) \) in the structure table do
4.   if \( v \notin I \) then
5.     foreach interval \( J \neq I \) do
6.       \( view(J) \).UpdateActiveWindowToFile();
7.     unpin all pages and empty the map, in VB;
8.     set \( I \) to be the next interval;
9.     load \( view(I) \) into the map of VB;
10.   ExecuteVertex\((v, R(v), VB, I)\);
11. return;
How to realize this is addressed in Section III-B. Suppose vertex \( s \) is a neighbor of \( v \). If the value table page of \( s \) has not been loaded into the frame table yet, we pin the value table page of \( s \) at Line 4. After all required vertex values for \( R(v) \) are loaded into memory, we execute the user-defined function \( UpdateVertex() \) to update the value record of \( v \) at Line 6. This may implicitly pin the value table page of \( v \). All pages are kept in the frame table of VB for later use, until an explicit call unpins them.
Consider the graph in Fig. 3 and its sharding structures in Table I. Suppose \( I = I_1 \). For the value buffer VB, the frame table contains the value table pages of Vertices 1, 2, and 3 in \( I_1 \), and the map contains the vertex values of Vertices 4, 6, 8, and 9 in \( S(I_1) \).
We can now explain our in-memory mode. It requires that the entire value table be held in the main memory, and hence only one shard exists. In this mode, the system performs a sequential scan over the structure table from disk, and for each structure record \( R(v) \) encountered, an executing thread starts Procedure ExecuteVertex for it on the fly. In Procedure ExecuteVertex, note that \( I \) includes all vertices in \( V \) and the map in VB is empty. Upon the end of each call of Procedure ExecuteVertex, \( R(v) \) is no longer needed and is removed immediately from the main memory to save space. In this way, we stream the processing of all structure records in an iteration. After an explicitly specified number of iterations has been done or the computation has converged, we can unpin all pages in VB and terminate the processing. To overlap disk operations as much as possible, all disk accesses over the structure table and the value table are done by concurrent threads, and multiple executing threads run concurrently to execute all subgraphs.
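As a rough illustration of the in-memory mode, the following single-threaded Python caricature (ours; real VENUS is multi-threaded C++ with pinned value-table pages) streams structure records once and applies a user-defined update, with each update immediately visible to later vertices in the same pass:

```python
# A single-threaded caricature (ours) of the in-memory mode: stream structure
# records R(v) once, and for each one run the user-defined update against the
# in-memory value table. Updates are visible immediately (asynchronous style).

def execute_iteration(structure_table, values, update_vertex):
    for v, in_neighbors in structure_table:   # one sequential "scan"
        values[v] = update_vertex(v, in_neighbors, values)

# Example update function (ours): each vertex takes the sum of its
# in-neighbors' current values.
def update_vertex(v, in_neighbors, values):
    return sum(values[u] for u in in_neighbors)

# Structure table: one record R(v) per vertex, listing its in-neighbors.
structure_table = [(1, [3, 4]), (2, [1]), (3, [2, 4]), (4, [2])]
values = {1: 1.0, 2: 1.0, 3: 1.0, 4: 1.0}
execute_iteration(structure_table, values, update_vertex)
print(values)
```

Note how Vertex 2 already sees the new value of Vertex 1 within the same iteration, mirroring the asynchronous processing the paper adopts from [3].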
B. Two Algorithms
When all vertex values cannot be held in main memory, the capacity of VB is inadequate to buffer all value pages. The in-memory mode described above cannot be directly applied in this case; otherwise there will be serious system thrashing. Based on the discussion of Section II-B, we split $V$ into $P$ disjoint intervals, such that the vertex values of each $v$-shard can be buffered into main memory.
In this case, we organize the processing of a single shard to be extendible in terms of multiple shards. The central issue here is how to manage the computing states of all shards to ensure the correctness of processing. This can be further divided into two tasks that must be fulfilled in executing each shard:
- constructing the map of $VB$ so that the active shard can be executed based on Procedure `ExecuteVertex` according to the previous discussion;
- synchronizing intermediate results to disk so that the latest updates are visible to any other shard to comply with the asynchronous parallel processing [3].
Note that these two tasks are performed based on the $v$-shard and the value table. In summary, the system still performs sequential scan over the structure table from disk, and continuously loads each structure record $R(v)$ and executes it with Procedure `ExecuteVertex` on the fly. Furthermore, the system also monitors the start and the end of the active shard, which triggers a call to finish the first and/or the second tasks. This is the framework of our next two algorithms.
**The algorithm using dynamical view.** Our first algorithm materializes all $v$-shard values as a `view` for each shard, as shown in Algorithm 1. Specifically, we associate each interval $I$ with `view(I)`, which materializes all vertex values of the vertices in $S(I)$. Thus the first task is to load this view into the map of $VB$, which is done at Line 2 or Line 9. Then, at the time when we finish the execution of an active shard and before we proceed to the next shard, we need to update the views of all other shards to reflect any changes of vertex values that can be seen by any other shard (Line 5 to Line 6). To do this efficiently, we exploit the data locality in all materialized views.
Specifically, the value records of each view are ordered by their vertex IDs. So in every view, say the $i$-th view for the $i$-th shard, all the value records for the $j$-th interval, $i \neq j$, are stored consecutively. More importantly, the value records for the $(j+1)$-th interval are stored immediately after those for the $j$-th. Therefore, similar to the parallel sliding windows of [3], when the active shard shifts from one interval to the next, we can also maintain an active sliding window over each of the views. And only the active sliding window of each view is updated immediately after we finish the execution of an active shard (Line 6).
Consider the example in Fig. 3 and Table I. For computation on interval $I_2$, loading the vertex values in $S(I_2)$ can be easily done with one sequential disk scan over `view(I_2)`, because the latest vertex values are already stored in `view(I_2)`. After computation, we need to propagate the updated value records to other intervals. In this example, we update those vertex values in the active sliding windows of `view(I_1)` and `view(I_3)` (shaded cells in Table I).
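The locality argument can be sketched in code (the data layout and function names below are ours, not VENUS's): because each view is sorted by vertex ID, the entries affected by the just-executed interval form one contiguous window per view:

```python
# Sketch (ours) of the view maintenance in Algorithm 1: view(J) stores the
# cached values of S(J) sorted by vertex ID, so the entries whose IDs fall in
# the just-executed interval I form one contiguous "active window" that can be
# rewritten with a single sequential access per view.

import bisect

def update_views(views, I_lo, I_hi, values):
    """Push new values of vertices in [I_lo, I_hi] into every other view."""
    for J, (ids, vals) in views.items():
        lo = bisect.bisect_left(ids, I_lo)     # window start in view(J)
        hi = bisect.bisect_right(ids, I_hi)    # window end in view(J)
        for k in range(lo, hi):                # one contiguous slice per view
            vals[k] = values[ids[k]]

# view(J) = (sorted vertex IDs of S(J), their cached values); toy data, ours.
views = {
    "J2": ([1, 2, 6], [0.0, 0.0, 0.0]),
    "J3": ([2, 5], [0.0, 0.0]),
}
values = {1: 10.0, 2: 20.0, 3: 30.0}           # fresh values of interval I = [1, 3]
update_views(views, 1, 3, values)
print(views)
```

Only the shaded-window entries change; the rest of each view is untouched, which is what keeps the per-shard maintenance cost to one seek plus a short sequential write per view.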
**Algorithm 2: Execute One Iteration with Merge-Join**
1. let $I$ be the first interval;
2. join $S(I)$ and the value table to populate the map of $VB$;
3. foreach $R(v)$ in the structure table do
4.   if $v \notin I$ then
5.     unpin all pages and empty the map, in $VB$;
6.     set $I$ to be the next interval;
7.     join $S(I)$ and the value table to populate the map of $VB$;
8.   `ExecuteVertex(v, R(v), VB, I)`;
9. return;
**The algorithm using merge-join.** Our second algorithm uses a merge-join over the $v$-shard and the value table. Its main advantage is that it avoids the overhead of maintaining all views at run time. It is shown in Algorithm 2. Specifically, we join $S(I)$ for each interval $I$ with the value table to obtain all vertex values of $S(I)$. Since both $S(I)$ and the value table are sorted by vertex ID, a merge-join can finish this quickly. The join results are inserted into the map of $VB$ at Line 2 and Line 7. All vertex values are directly updated in the value table, so any changes of vertex values are immediately visible to any other shard.
Again, we consider the example in Fig. 3 and Table I. Suppose that we want to update interval $I_1$. First, we need to load $S(I_1) = \{4, 6, 8, 9\}$ into the map of $VB$. To do so, we use a merge-join over the value table and $S(I_1)$. Since the value table and $S(I_1)$ are both sorted by vertex ID, we need just one sequential scan over the value table. The updated values of vertices in $I_1$ are written to the value table directly, which incurs only sequential IOs.
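A minimal sketch (ours) of this merge-join, run on the $S(I_1) = \{4, 6, 8, 9\}$ example; both inputs are sorted by vertex ID, so a single synchronized pass suffices:

```python
# Sketch (ours) of the merge-join in Algorithm 2: S(I) and the value table are
# both sorted by vertex ID, so one synchronized pass fills the map of VB.

def merge_join(s_ids, value_table):
    """s_ids: sorted IDs of S(I); value_table: sorted list of (id, value)."""
    result = {}
    j = 0
    for vid in s_ids:
        while value_table[j][0] < vid:   # advance the value-table cursor
            j += 1
        assert value_table[j][0] == vid  # every S(I) vertex has a value record
        result[vid] = value_table[j][1]
    return result

# Toy value table (ours): one (vertex ID, value) record per vertex, ID-sorted.
value_table = [(1, 0.1), (2, 0.2), (3, 0.3), (4, 0.4), (6, 0.6), (8, 0.8), (9, 0.9)]
vb_map = merge_join([4, 6, 8, 9], value_table)
print(vb_map)
```

The cursor `j` only moves forward, so the whole join is one sequential scan of the value table, which is exactly why the algorithm incurs only sequential read IOs per interval.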
Finally, as the RAM becomes large enough to hold the complete value table, only one shard remains, with a single interval containing all vertices. The view/merge-join is no longer needed, and both algorithms automatically work in the in-memory mode.
**C. Analysis of IO costs**
To compare the capabilities and limitations of the two algorithms, we look at the IO costs of performing one iteration of graph computation using the theoretical IO model [20]. In this model, the IO cost of an algorithm is the number of block transfers from disk to main memory plus the number of non-sequential seeks. So the complexity is parameterized by the size of a block transfer, $B$.
For Algorithm 1, the size of data read is $C(n + \delta m) + Dm$ obtained from Table IV. Since loading does not require any non-sequential seeks, the number of read IOs is \( \frac{C(n + \delta m) + Dm}{B} \). On the other hand, to update all $v$-shards data, the number of block transfers is \( \frac{C(n + \delta m)}{B} \). In addition, in the worst case, each interval requires $P$ non-sequential seeks to update the views of other shards. Thus, the total number of non-sequential seeks for a full iteration has a cost of $P^2$. So the total number of write IOs of Algorithm 1 is \( \frac{C(n + \delta m)}{B} + P^2 \).
For Algorithm 2, the number of read IOs can be analyzed by considering the cost of the merge-join for $P$ intervals, and then adding the cost of loading the structure table. The merge-join for each interval sequentially scans the value table, at a cost of $\frac{Cn}{B}$ block transfers plus one non-sequential seek. The size of the structure table is $Dm$. Thus, the total number of read IOs is $\frac{PCn + Dm}{B} + P$. For interval $I$, the cost of updating the value table is $\frac{C|I|}{B}$. Hence, the total number of write IOs is $\sum_{I} \frac{C|I|}{B} = \frac{Cn}{B}$.
Table V compares GraphChi, X-Stream, and our algorithms. We can see that the IO cost of Algorithm 1 is always less than that of GraphChi. Also, when $P$ is small, the numbers of read IOs of Algorithm 1 and Algorithm 2 are similar, but the number of write IOs of Algorithm 2 is much smaller than that of Algorithm 1. These results can guide us in choosing the proper algorithm for different graphs.
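To get a feel for these bounds, the following back-of-envelope sketch (ours; it hard-codes the read/write formulas as we read them from this section, with illustrative parameters) compares the two algorithms:

```python
# Back-of-envelope IO counts (our reading of the analysis in this section).
# Algorithm 1: read = (C(n + delta*m) + D*m)/B ; write = C(n + delta*m)/B + P^2 seeks.
# Algorithm 2: read = (P*C*n + D*m)/B + P      ; write = C*n/B.

def alg1_io(n, m, delta, P, C, D, B):
    read = (C * (n + delta * m) + D * m) / B
    write = C * (n + delta * m) / B + P * P
    return read, write

def alg2_io(n, m, P, C, D, B):
    read = (P * C * n + D * m) / B + P
    write = C * n / B
    return read, write

# Illustrative parameters (ours): C = D = 8 bytes, B = 16MB (hard disk, per [14]).
n, m, delta, C, D, B = 1_000_000, 10_000_000, 0.5, 8, 8, 16 * 2**20
for P in (2, 16):
    print(P, alg1_io(n, m, delta, P, C, D, B), alg2_io(n, m, P, C, D, B))
```

Under these numbers, Algorithm 2 writes far fewer IOs at any $P$, while its read IOs grow linearly with $P$; this mirrors the trade-off discussed above.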
IV. PERFORMANCE
In this section, we evaluate our system, VENUS, and compare it with the two most related state-of-the-art systems, GraphChi [3] and X-Stream [14]. GraphChi uses the parallel sliding windows model and is denoted as PSW in all figures. X-Stream employs the edge-centric processing model and is thus denoted as ECP. Our system is built on the vertex-centric streamlined processing (VSP) model, which is implemented with two algorithms: Algorithm 1 materializes vertex values in each shard for fast retrieval during the execution, and is denoted as VSP-I; Algorithm 2 applies merge-join to construct all vertex values on the fly, and is denoted as VSP-II. The two algorithms use the same vertex-centric update function for a graph task, and an input parameter indicates which algorithm should be used. VENUS is implemented in C++.
We ran each experiment three times and report the averaged execution time. We deliberately avoid the caching of disk pages by the operating system, as explained in Section IV-A. All algorithms are evaluated using hard disk, so we do not include TurboGraph [15] due to its requirement of an SSD drive on the computer. Like GraphChi and X-Stream, VENUS allows the user to explicitly specify a main memory budget. Specifically, we spend half of the main memory budget on the frame table in VB, which is managed based on the LRU replacement strategy; most of the remaining budget is for the map in VB, leaving the rest of the memory for storing auxiliary data. All experiments are conducted on a commodity machine with an Intel i7 quad-core 3.4GHz CPU, 16GB RAM, and a 4TB hard disk, running Linux.
We mainly examine four important aspects of a system which are key to its performance: 1) computational time; 2) the effectiveness of main memory utilization; 3) the amount of data read and write; and 4) the number of shards. We experiment over 4 large real-world graph datasets, Twitter [21], clueweb12 [22], Netflix [23], and Yahoo! Music user ratings used in KDD-Cup 2011 [24] as well as synthetic graphs. We use the SNAP graph generator\(^1\) to generate 4 random power-law graphs, with increasing number of vertices, where the power-law degree exponent is set as 1.8. The data statistics are summarized in Table VI. We consider the following data mining tasks, 1) PageRank [25]; 2) Weakly Connected Components (WCC) [17]; 3) Community Detection (CD) [17]; and 4) Alternating Least Squares (ALS) [26]. Our four algorithms essentially represent graph applications in the categories of iterative matrix operations (PageRank), graph mining (WCC and CD), and collaborative filtering (ALS). Certain graph algorithms like belief propagation [3], [14] that require the mutable value of one edge to be computed recursively based on the mutable values of other edges, cannot be implemented on VENUS without modifications.
In Table VII, we report the preprocessing time of GraphChi and VENUS under an 8GB memory budget. The preprocessing step of our system is split into 3 phases: (1) counting the degree of each vertex (requiring one pass over the input file) and dividing vertices into $P$ intervals; (2) writing each edge to a temporary scratch file of the owning interval; and (3) sorting edges by their source vertices in each scratch file to construct the v-shard and g-shard files; in this phase, the processing also merges adjacent intervals, counting the distinct vertices in the corresponding scratch files, until the memory budget is reached. These phases are identical to those used in GraphChi, except that we further merge adjacent intervals in phase 3. The total IO cost of preprocessing is $\frac{5|E|}{B} + \frac{|V|}{B}$ (where $B$ is the block size), which is the same as GraphChi [3]. Therefore, the preprocessing cost is proportional to the graph size. These results are verified in Table VII. Note that there is no preprocessing in X-Stream.
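The three phases can be caricatured in memory as follows (our own sketch; the real system streams through on-disk scratch files and uses the degree counts and the memory budget to cut the intervals, whereas here we cut vertex IDs evenly):

```python
# Our in-memory caricature of the 3-phase preprocessing:
# (1) count degrees and cut V into intervals, (2) scatter each edge to the
# interval owning its destination, (3) sort each bucket by source vertex to
# form one g-shard per interval.

def preprocess(edges, n, P):
    # Phase 1: in-degree count (would drive the interval cut in the real
    # system; here we simply cut vertex IDs 1..n evenly into P intervals).
    indeg = [0] * (n + 1)
    for u, v in edges:
        indeg[v] += 1
    bounds = [(i * n) // P for i in range(P + 1)]   # interval boundaries
    # Phase 2: scatter edges to the interval owning their destination.
    buckets = [[] for _ in range(P)]
    for u, v in edges:
        i = next(i for i in range(P) if v <= bounds[i + 1])
        buckets[i].append((u, v))
    # Phase 3: sort each bucket by source vertex -> one g-shard per interval.
    return [sorted(b) for b in buckets]

edges = [(1, 2), (2, 3), (3, 1), (4, 1), (4, 3), (2, 4)]
gshards = preprocess(edges, 4, 2)
print(gshards)
```

Each phase makes one pass over the edges, which is consistent with a preprocessing cost linear in the graph size.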
A. Exp-1: PageRank on Twitter Graph
The first experiment runs PageRank on the Twitter graph. We compare the four algorithms (PSW, ECP, VSP-I, VSP-II) under various memory budgets from 0.5GB to 8GB. Note that PSW and VSP-I/VSP-II use Linux system calls to access data from disk, where the operating system caches data in its pagecache. This allows PSW and VSP-I/VSP-II to take advantage of extra main memory in addition to the memory budget. On the other hand, X-Stream uses direct IO and does not benefit from this. Therefore, for the sake of fairness, we
---
\(^1\)http://github.com/snap-stanford/snap
---
<table>
<caption><strong>TABLE V: Big-O bounds in the IO model of single-machine graph processing systems</strong></caption>
<thead>
<tr>
<th>System</th>
<th># Read IO</th>
<th># Write IO</th>
</tr>
</thead>
<tbody>
<tr>
<td>GraphChi [3]</td>
<td>$\frac{Cn + 2(C+D)m}{B} + P^2$</td>
<td>$\frac{Cn + 2(C+D)m}{B} + P^2$</td>
</tr>
<tr>
<td>X-Stream [14]</td>
<td>$\frac{Cn + (C+D)m}{B}$</td>
<td>$\frac{Cn + Cm}{B}$</td>
</tr>
<tr>
<td>Alg. 1</td>
<td>$\frac{C(n + \delta m) + Dm}{B}$</td>
<td>$\frac{C(n + \delta m)}{B} + P^2$</td>
</tr>
<tr>
<td>Alg. 2</td>
<td>$\frac{PCn + Dm}{B} + P$</td>
<td>$\frac{Cn}{B}$</td>
</tr>
</tbody>
</table>
<table>
<caption><strong>TABLE VI: Data statistics</strong></caption>
<thead>
<tr>
<th>Dataset</th>
<th># Vertex</th>
<th># Edge</th>
<th>Type</th>
</tr>
</thead>
<tbody>
<tr>
<td>Twitter</td>
<td>41.7 million</td>
<td>1.4 billion</td>
<td>Directed</td>
</tr>
<tr>
<td>clueweb12</td>
<td>978.4 million</td>
<td>42.5 billion</td>
<td>Directed</td>
</tr>
<tr>
<td>Netflix</td>
<td>0.5 million</td>
<td>99.0 million</td>
<td>Bipartite</td>
</tr>
<tr>
<td>KDDCup</td>
<td>1.0 million</td>
<td>252.8 million</td>
<td>Bipartite</td>
</tr>
<tr>
<td>Synthetic-4m</td>
<td>4 million</td>
<td>54.37 million</td>
<td>Directed</td>
</tr>
<tr>
<td>Synthetic-6m</td>
<td>6 million</td>
<td>86.04 million</td>
<td>Directed</td>
</tr>
<tr>
<td>Synthetic-8m</td>
<td>8 million</td>
<td>118.58 million</td>
<td>Directed</td>
</tr>
<tr>
<td>Synthetic-10m</td>
<td>10 million</td>
<td>151.99 million</td>
<td>Directed</td>
</tr>
</tbody>
</table>
use pagecache-management\textsuperscript{2} to disable pagecache in all our experiments.
The results of processing time are reported in Fig. 4(a), where we can see that VSP is up to 3x faster than PSW and ECP. For example, in the case where the budget of main memory is 8GB, PSW spends 1559.21 seconds and ECP needs 1550.2 seconds, while VSP-I and VSP-II need just 477.39 and 483.47 seconds, respectively.
In Fig. 4(b), we compare PSW and VSP in terms of the overall waiting time before executing a next shard. For PSW, it includes the loading and sorting time of each memory shard; for VSP, it includes the time to execute unpin calls, view updates, and merge-join operations. Note that the time of scanning the structure table is evenly distributed among the processing of all vertices, and is not included here. It can be observed that PSW spends a significant amount of time on processing the shards before execution. In contrast, such waiting time for VSP is much smaller, because VSP executes the update function while streaming in the structure data. For example, in the case where the budget of main memory is 8GB, PSW spends 749.78 seconds here, whereas VSP-I and VSP-II need just 104.01 and 102.12 seconds, respectively. Note that about half of the total processing time of PSW is spent here, far more than in our algorithms.
VSP also generates a significantly smaller number of shards than PSW, as shown in Fig. 4(c). For example, with memory budgets of 0.5GB and 1GB, PSW generates 90 and 46 shards, respectively, whereas our algorithms generate 15 and 4. This is because VSP spends the main share of the memory budget on the value data of a v-shard, while the space needed to keep the related structure data in memory is minimized.
Fig. 4(d) and Fig. 4(e) show the amount of data write and read, respectively. We observe that the data size written/read to/from disk is much smaller in VSP than in the other systems. Specifically, PSW has to write 24.7GB of data to disk, and read the same amount of data from disk, per iteration, regardless of the memory size. These numbers for ECP are 11GB and 28GB under an 8GB memory budget, which are also very large and become a significant setback of ECP in its edge-centric streamlined processing. In sharp contrast, VSP only writes 0.24GB, which is about 100X smaller than both PSW and ECP. In terms of the data size of read, VSP reads 12.7-16.8GB of data under the various memory budgets. The superiority of
\textsuperscript{2}https://code.google.com/p/pagecache-mangagement/
VSP in data access is mainly due to the separation of structure data and value data and caching the value data in a fixed buffer.
Although VSP-I and VSP-II perform very closely, they differ slightly in IO performance, as shown in Fig. 4(f). For most cases, VSP-II incurs fewer IOs than VSP-I because it is free of maintaining the materialized views in executing shards. However, when the memory budget is smaller than 1GB, the number of shards $P$ increases quickly. In this case, VSP-II is slower due to the heavy access of the value table.
B. Exp-2: More Graph Mining Tasks
After the evaluation under various RAM sizes, we further compare the four algorithms on other graph mining tasks, with the memory budget set to 4GB for all algorithms. In detail, Fig. 5(a) shows the processing time of running WCC and CD over Twitter, where our algorithms, VSP-I and VSP-II, clearly outperform the other competitors. For example, in the WCC task, the existing algorithms, PSW and ECP, spend 1772.57 and 4321.06 seconds, respectively, while our algorithms spend 942.74 and 972.43 seconds, respectively. In this task, ECP is much slower than PSW. One reason is that both PSW and our algorithms can employ selective scheduling [3], [19] to skip unnecessary updates on some vertices/shards. However, this feature is infeasible for ECP because it is edge-centric and thus cannot support selective scheduling of vertices.
For the CD task, Fig. 5(a) shows the performance of PSW and our algorithms. In detail, PSW spends 2317.75 seconds, while VSP-I and VSP-II need just 1614.65 and 1617.04 seconds, respectively. The CD task cannot be accomplished by ECP, because CD is based on label propagation [17], where each vertex chooses the most frequent label among its neighbors in the update function. The most frequent label can be easily decided in vertex-centric processing, where all neighbors and incident edges are passed to the update function. However, this is not the case for edge-centric processing: ECP cannot iterate over all incident edges and all neighbors to complete the required operation.
The next task is ALS, which is tested over both datasets of Netflix and KDDCup. The overall processing time is given in Fig. 5(b). In this test, the performance of PSW is much slower than ECP and our algorithms, but ECP is slightly better than both VSP-I and VSP-II. For example, in terms of the ALS task over KDDCup, PSW spends 1446.01 seconds. ECP spends 259.99 seconds. VSP-I and VSP-II spend 357.04 and 190.32 seconds, respectively.
We compare the total data size accessed per iteration by the four algorithms in Fig. 5(c) and Fig. 5(d). ECP still accesses more data than we do. For example, ECP has to access 2.64GB and 8.23GB of disk data, including both read and write, for Netflix and KDDCup, respectively; for VSP-I and VSP-II, the numbers are just 2.41GB and 3.96GB. However, our VSP is slightly slower in this test, because VSP requires more non-sequential seeks than ECP. Finally, note that because VSP-I and VSP-II both work in the in-memory mode due to the small graph sizes of Netflix and KDDCup, they read/write the same amount of data.
C. Exp-3: The Synthetic Datasets
To see how a system performs on graphs with increasing data size, we also did experiments over the 4 synthetic datasets. We test with PageRank and WCC, and report the running time in Fig. 6(a) and Fig. 6(b) respectively. Again, we see that VSP uses just a fraction of the amount of time as compared to the other two systems.
In general, the processing time increases with the number of vertices. However, the time of PSW and ECP increases much faster than VSP. For example, when the number of vertices increases from 4 million to 10 million, the time of PSW increases by 40.81 and 76.68 seconds for the task of PageRank and WCC, respectively; and the time of ECP increases by 74.13 and 198.72 seconds. In contrast, the time of VSP-I just increases by 21.85 and 49.93 seconds. The superior performance of VSP is mainly due to the less amount of data access per iteration, as shown in Fig. 6(c) and Fig. 6(d).
D. Exp-4: On the Web-Scale Graph
In this experiment, we compare GraphChi, X-Stream, and VENUS on a very large-scale web graph, clueweb12 [22], which has 978.4 million vertices and 42.5 billion edges. We choose not to use yahoo-web [27], which has been used in many previous works [3], [14], because the density of yahoo-web is incredibly low: 53% of its nodes are dangling nodes (nodes with no outgoing edges), and testing algorithms and systems on yahoo-web might give inaccurate speed reports. On the other hand, the number of edges in clueweb12 is an order of magnitude larger than that of yahoo-web. We test with PageRank and WCC, and report the running time in Table VIII.
VENUS significantly outperforms GraphChi and X-Stream by reading and writing less amount of data.

Fig. 6. The Synthetic Datasets
Table VIII
EXPERIMENT RESULTS: PAGERANK ON CLUEWEB12
<table>
<thead>
<tr>
<th>System</th>
<th>Time (s)</th>
<th>Read (GB)</th>
<th>Write (GB)</th>
</tr>
</thead>
<tbody>
<tr>
<td>PSW</td>
<td>15,495</td>
<td>661</td>
<td>661</td>
</tr>
<tr>
<td>ECP</td>
<td>26,702</td>
<td>1,121</td>
<td>571</td>
</tr>
<tr>
<td>VSP-I</td>
<td>7,074</td>
<td>213</td>
<td>43</td>
</tr>
<tr>
<td>VSP-II</td>
<td>6,465</td>
<td>507</td>
<td>19</td>
</tr>
</tbody>
</table>
V. RELATED SYSTEMS
There are several options for processing big graph tasks: it is possible to create a customized parallel program for each graph algorithm in a distributed setting, but this approach is difficult to generalize and the development overhead can be very high. We can also rely on graph libraries with various graph algorithms, but such graph libraries cannot handle web-scale problems [1]. Recently, graph computing over distributed or single multi-core platforms has emerged as a new framework for big data analytics, and it has drawn intensive interest [1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11]. Broadly speaking, all existing systems can be categorized into the so-called data-parallel systems (e.g., MapReduce/Hadoop and its extensions) and graph-parallel systems.
The data-parallel systems stem from MapReduce. Since MapReduce does not originally support iterative graph algorithms, there have been considerable efforts to leverage and improve the MapReduce paradigm, leading to various distributed graph processing systems including PEGASUS [5], GBase [9], Giraph [28], and SGC [29]. On the other hand, the graph-parallel systems use new programming abstractions to compactly formulate iterative graph algorithms, including Pregel [1], Hama [7], Kineograph [10], Trinity [11], GRACE [19], [18], Horton [30], GraphLab [2], and ParallelGDB [31]. There is also work trying to bridge the two categories of systems, such as GraphX [32]. As a recent branch of graph-parallel systems, the disk-based graph computing systems, such as GraphChi [3], X-Stream [14], and TurboGraph [15], have shown great potential in graph analytics; they do not need to divide and distribute the underlying graph over a number of machines, as was done in previous graph-parallel systems. And remarkably, they can work with just a single PC on very large-scale problems. It has been shown that disk-based graph computing on a single PC can be highly competitive even compared to parallel processing over large-scale clusters [3].
**Disk-based systems.** The disk-based systems, including GraphChi [3], TurboGraph [15], and X-Stream [14], are closely related to our work. Both GraphChi and VENUS are vertex-centric. Like our system VENUS, GraphChi also organizes the graph into a number of shards. However, unlike VENUS, which requires only a v-shard to fit into memory, GraphChi requires each shard to fit entirely in main memory. As a result, GraphChi usually generates many more shards than VENUS under the same memory constraint (Fig. 4(c)), which incurs more data transfer (Fig. 4(d) and Fig. 4(e)) and random IOs. Furthermore, GraphChi starts computation only after a shard is completely loaded, and processes the next shard only after value propagation is completely done. In contrast, VENUS enables streamlined processing, which performs computation while the data is streaming in. Another key difference of VENUS from GraphChi lies in its use of a fixed buffer to cache the v-shard, which can greatly reduce random IOs.
TurboGraph can process graph data without delay, at the cost of limiting its scope to certain embarrassingly parallel algorithms. In contrast, VENUS can handle almost as broad a set of algorithms as GraphChi. Unlike VENUS, which uses a hard disk, TurboGraph is built on SSDs. X-Stream is edge-centric and allows streamlined processing like VENUS, by storing partial, intermediate results to disk for later access. However, this doubles sequential IOs, incurs additional computation cost, and increases data loading overhead.
VENUS improves on previous systems in several important directions. First, we separate the graph data into a fixed structure table and a mutable value table file, and use a fixed buffer for vertex value access, which almost eliminates the need for the batch propagation operation in GraphChi (thus reducing random IOs). Furthermore, each shard in VENUS is not constrained to fit into memory; instead, the shards are concatenated to form one consecutive file for streamlined processing, which not only removes the batch loading overhead but also enjoys a much faster speed compared to random IOs [14]. Compared to TurboGraph, VENUS can handle a broader set of data mining tasks; compared to X-Stream, VENUS processes the graph data just once (instead of twice in X-Stream) and without the burden of writing the entire graph to disk in the course of computation.
VI. CONCLUSION
We have presented VENUS, a disk-based graph computation system that is able to handle billion-scale problems efficiently on just a single commodity PC. It includes a novel design for graph storage, a new data caching strategy, and a new external graph computing model that implements vertex-centric streamlined processing. In effect, it can significantly reduce data access, minimize random IOs, and effectively exploit main memory. Extensive experiments on 4 large-scale real-world graphs and 4 large-scale synthetic graphs show that VENUS can be much faster than GraphChi and X-Stream, two state-of-the-art disk-based systems. In future work, we plan to improve our selective scheduling of vertex updates and extend our system to SSDs, which will further accelerate VENUS.
ACKNOWLEDGMENTS
The work is partly supported by NSFC of China (Grant No. 61103049) and 973 Fundamental R&D Program (Grant No.2014CB834030). The authors would like to thank the anonymous reviewers for their helpful comments.
REFERENCES
Open Research Compiler (ORC) for Itanium™ Processor Family
Presenters:
Roy Ju (MRL, Intel Labs)
Sun Chan (MRL, Intel Labs)
Chengyong Wu (ICT, CAS)
Ruiqi Lian (ICT, CAS)
Tony Tuo (MRL, Intel Labs)
Micro-34 Tutorial
December 1, 2001
Agenda
- Overview of ORC
- New Infrastructure Features
- Region-based compilation
- Rich support for profiling
- New IPF* Optimizations
- Predication and analysis
- Control and data speculation
- Global instruction scheduling
- Parameterized machine model
- Research Case Study
- Resource management during scheduling
- Demo of ORC
- Release and Future Plans
* IPF stands for Itanium Processor Family in this presentation
Overview of ORC
Objectives of ORC
- To provide a leading open source IPF (IA-64) compiler infrastructure to the compiler and architecture research community
- To encourage compiler and architecture research
- To minimize the resource investments for university groups
- Performance better than existing IPF open source compilers
- Fair comparison on a common infrastructure
Requirements for ORC
- Robustness
- solid research compiler infrastructure
- Timing of availability
- to enable research in early IPF systems
- Flexibility
- modularity and clean interface to facilitate prototyping novel ideas
- Performance
- leading performance among IPF open source compilers
- sufficiently high to make research results from this compiler trustworthy
What’s in ORC?
• C/C++ and Fortran compilers targeting IPF
• After evaluation, we chose to base ORC on the Pro64 open source compiler from SGI
➔ Retargeted from the MIPSpro product compiler
➔ Mostly meet our requirements
➔ open64.sourceforge.net
• Major components:
➔ Front-ends: C/C++ FE and F90 FE
➔ Interprocedural analysis and optimizations
➔ Loop-nest optimizations
➔ Scalar global optimizations
➔ Code generation
• On the Linux platform
BE Components Inherited from Pro64
• Inter-procedural analysis and optimizations (IPA)
➣ mod/ref summary, aliasing analysis, array section analysis, call tree, inlining, dead function elimination, …
• Loop-nest optimizations (LNO)
➣ Locality opt., parallelization, loop distribution, unimodular transformations, array privatization, OpenMP, …
• Scalar global optimizations (WOPT)
➣ SSA-based partial redundancy elimination, induction variable recognition, strength reduction, ld/st-PRE, copy propagation, …
• A Pro64 tutorial by Gao, Amaral, Dehnert at PACT 2000
➣ http://www.cs.ualberta.ca/~amaral/Pro64/index.html
• A list of publications from MIPSpro posted by SGI.
Intermediate Representations (IR)
• WHIRL:
➢ AST-based IR
➢ To communicate among IPA, LNO, WOPT, and CG
➢ Well documented and released by SGI
• Symbol table:
➢ Document also released by SGI
• CGIR:
➢ Register-based IR used in CG
Flow of IR
Front-ends
Lower aggregates
Un-nest calls
Lower COMMA, RCOMMA
Lower ARRAYs
Lower Complex Numbers
Lower high level control flow
Lower IO
Lower bit-fields
Spawn nested procedures for parallel regions
Lower intrinsics to calls
Generate simulation code for quads
All data mapped to segments
Lower loads/stores to final form
Expose code sequences for constants and addresses
Expose $gp for -shared
Expose static link for nested procedures
Code Generation
ORC Tutorial
What’s new in ORC?
• A largely redesigned CG
• Research infrastructure features:
➤ Region-based compilation
➤ Rich profiling support
➤ Parameterized machine descriptions
• IPF optimizations:
➤ If-conversion and predicate analysis
➤ Control and data speculation with recovery code generation
➤ Global instruction scheduling with resource management
• More beyond the first release
Major Phase Ordering in CG
- edge/value profiling
- region formation
- if-conversion/parallel cmp.
- loop opt. (swp, unrolling)
- global inst. sched. (predicate analysis, speculation, resource management)
- register allocation
- local inst. scheduling
(flexible profiling points)
(new)
(existing)
Prospective Research Usage of ORC
- Performance-driven optimizations in all components
- Co-design of compilers and architecture for new hardware features
- Thread-level parallelism
- Retarget to emerging languages (e.g. CLI, Java, …)
- Power management
- Type safety under optimizations
- Optimizations for memory hierarchy
- Program analysis
- Co-design of static and dynamic compilation
- …
The ORC Project
• Initiated by Intel Microprocessor Research Labs (MRL)
• Joint efforts among
➤ Programming Systems Lab, MRL
➤ Institute of Computing Technology, Chinese Academy of Sciences
➤ Intel China Research Center, MRL
• Development efforts started in Q4 2000
• Development team: 10 – 15 people
• First release scheduled for Jan 2002
Region-based Compilation
• Motivations:
- To form a scope for optimizations
- To control compilation time and space
• What’s a region?
- Nodes: basic blocks
- Edges: control flow transfer
- Acyclic
• Loops impose region boundary
• Exception: irreducible regions
- (Currently) single-entry-multiple-exit
- Regions under hierarchical relations
• Regions could be nested within regions
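The region hierarchy described above can be sketched with a small data structure. This is a minimal illustration, not ORC's actual C++ API; the class and method names here are made up:

```python
# Sketch of a hierarchical region: an acyclic set of basic blocks with a
# single entry, possibly multiple exits, and nested child regions.
class Region:
    def __init__(self, rid, blocks, attrs=None):
        self.rid = rid
        self.blocks = set(blocks)      # basic blocks directly in this region
        self.children = []             # nested regions (the region tree)
        self.attrs = set(attrs or ())  # optimization-guiding attributes

    def add_child(self, child):
        self.children.append(child)
        return child

    def blocks_recursive(self):
        """Iterate all blocks in this region and its nested regions."""
        yield from self.blocks
        for c in self.children:
            yield from c.blocks_recursive()

# A loop body becomes its own region, nested under the enclosing one.
outer = Region(1, ["BB1", "BB2", "BB5"])
loop = outer.add_child(Region(2, ["BB3", "BB4"], attrs=["swp_done"]))
assert sorted(outer.blocks_recursive()) == ["BB1", "BB2", "BB3", "BB4", "BB5"]
assert "swp_done" in loop.attrs
```

The per-region attribute set models the "no further optimizations" style annotations mentioned later in the deck.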
Features of Region-based Compilation
• Region structure can be constructed and deleted at different optimization phases
➤ Incremental update also supported
• Optimization-guiding attributes at each region, e.g.
➤ No further optimizations, e.g. swp’ed regions
➤ No optimization across region boundary
➤ These attributes need to be preserved if region structure is rebuilt
• Region formation algorithm decoupled from the region structure
➤ Can construct and support different types of regions
• Basic blocks, superblock/hyperblock, treegion, etc. are special cases of SEME regions
Region-based Compilation
• Utility and support:
- Iterators to traverse regions
- Each region marked with its attributes
- Regional CFG
- Incremental update due to CFG changes
- Validation of regions
• Region formation takes into account:
- Size
- Shape and topology
- Exit probability
- Tail duplication and duplication ratio
Region-based Compilation
• Current ORC implementation
- Region structure constructed right before if-conversion
- Preserved till after global instruction scheduling
- Phases working under regions
- The majority of CG phases
- If-conversion, predicate analysis, loop optimizations, instruction scheduling, speculation w/ recovery code gen, EBO, CFLOW, etc.
- Noticeable exception: register allocation (GRA)
- Different from the incomplete region work in Pro64
Outline of Region Formation
- Form interval regions
- Form MEME regions
- Find a seed with the highest frequency
- Extend to form a hot path and then an MEME region
- Form SEME regions from each MEME region
- SEME Region Formation
1. For node x whose exit probability > threshold, add x to candidate exits
2. For every candidate exit, traverse backward to form a candidate set $m'$
3. Try the candidate exit $y$ with the largest size of $m'$
4. If $m'$ is already SEME, done
5. If $m'$ has side entries, select nodes to cut and compute the duplication ratio
6. If the ratio is within the budget, tail duplicate to trim $m'$ and done
7. If the ratio is beyond the budget, remove $y$ from the candidate exits and go back to step 3
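Steps 4-7 above can be sketched as follows, under simplifying assumptions: the candidate set m' is given, `preds` maps each block to its CFG predecessors, and the duplication ratio is taken as duplicated blocks over |m'|. All names are illustrative:

```python
def side_entries(m, preds, entry):
    """Blocks in m (other than the entry) with a predecessor outside m."""
    return {b for b in m if b != entry
            and any(p not in m for p in preds.get(b, ()))}

def trim_to_seme(m, preds, entry, budget=0.25):
    cut = side_entries(m, preds, entry)
    if not cut:
        return m, 0.0                 # already single-entry (step 4)
    ratio = len(cut) / len(m)         # blocks that must be tail-duplicated
    if ratio <= budget:
        return m, ratio               # tail-duplicate within budget (step 6)
    return None, ratio                # over budget: try another exit (step 7)

preds = {"B": ["A"], "C": ["A", "X"], "D": ["B", "C"]}  # X lies outside m'
m = {"A", "B", "C", "D"}
region, ratio = trim_to_seme(m, preds, entry="A")
assert region == m and ratio == 0.25  # one side entry (C), within budget
```

The real formation algorithm also weighs size, shape, and exit probability, as the earlier slide lists.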
Example of Region Hierarchy
Example of Region Hierarchy (cont.)
Region Tree
(hierarchical relation)
Regional CFG of Region 3
Usage of Region-based Compilation
• Current usage
✐ Forming profitable optimization scopes
• For global instruction scheduling and if-conversion
✐ Controlling compilation time and space
• Prospective research usage
✐ Regions as optimization boundary
✐ Optimization-guiding attributes to propagate info from an opt. phase to a later one
✐ E.g. to support the multi-threading regions partitioned by compilers
✐ Comparing optimizations under different shapes of regions
Rich Support for Profiling
Profiling Support in Pro64
• Pro64 user model:
➢ Edge profiling for execution frequency
➢ Instrumentation and feedback annotation at same point of compilation phase
➢ Consistent optimization levels to ensure the same inputs at both instrumentation and annotation
➢ Later phases maintain feedback correctness through propagation and verification
➢ Instrumentation right after FE
Usage:
- `fb_create` directory-path
- `fb_opt` directory-path
Profiling Support in ORC
- Edge profiling, value profiling, and beyond
- Various instrumentation points in CG
- Same user model as Pro64
- Co-exist with the Pro64’s profiling model
- Optimizations after feedback point update and verify feedback correctness
- Or start a new instrumentation/annotation point to avoid update
ORC Profiling in CG
Usage, backward compatible with Pro64:
• `-fb_create dir-path { -fb_phase n } { -fb_type m }`
• `-fb_opt dir-path { -fb_phase n } { -fb_type m }`
where n is instrumentation point.
• Pro64 model
• beginning of cg
• after if-conversion in cg
Value Profiling
• Profiling the values of instruction operands
• Important tool for limit study or to collect program statistics
• Based on Calder, Feller, Eustace, “Value Profiling”, Micro-30
• Top value tables in feedback file
• Current usage
➤ Profiling target values of loads at the beginning of CG
➤ Profiling values for selective loads at a later phase
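A bounded "top value table" of this kind can be sketched as below. This uses a simple evict-the-minimum policy, in the spirit of (but not identical to) Calder et al.'s TNV table; the function name is illustrative:

```python
def profile_values(stream, table_size=4):
    """Track approximate hit counts for the most frequent operand values."""
    table = {}
    for v in stream:
        if v in table:
            table[v] += 1
        elif len(table) < table_size:
            table[v] = 1
        else:
            victim = min(table, key=table.get)  # evict the current minimum
            del table[victim]
            table[v] = 1
    return sorted(table.items(), key=lambda kv: -kv[1])

# A load whose target value is 0 most of the time is a candidate for
# value-based specialization.
top = profile_values([0, 0, 8, 0, 0, 16, 0, 8, 0])
assert top[0] == (0, 6)
```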
Extend to Other Profiling
- Feedback format
- Flexible to extend
- Simple to maintain backward compatibility
- Same format for every phase
- Feedback at different phases go to different feedback files – simple scheme to deal with various profiles
- Prospective research usage
- Extend to memory profiling, data profiling, return value profiling, …
- Collect program statistics
Feedback format
- **fb_header**: id, version, pu_hdr(sz, #), …
- **pu_headers**: fb_type info: …; loop(#, sz): … call(#, sz)
- **pu_fb_tables**: loop1 fb_info: … loopn fb_info; edge 1 fb_info: … edge n fb_info
- **string table**: pu_names
If-conversion and Predicate Analysis
Architecture Support of Predication
• Predicate registers
❯ 64 predicate registers (pr16-pr63 rotating registers)
❯ Predicate register transfers:
• mov pr = … / mov … = pr / mov pr.rot = …
• Conditional execution
❯ Qualifying predicate: to decide if the guarded instructions modify architectural state
• Compare instructions
❯ Normal compare
❯ Unconditional compare
❯ Parallel compare: and, or, and/orcm, or/andcm
Compilation Support of Predication
• If-conversion
- Converts control flow (branches eliminated) to predicated instructions
- Generates parallel compare instructions
- Invoked after region formation and before loop optimization
- Replaces the hyperblock formation in Pro64
• Predicate analysis
- Analyze relations among predicates and control flow
- Relations stored in Predicate Relation Database (PRDB)
- Interface provided to query PRDB
- PRDB can be deleted and recomputed at will without affecting correctness
If-conversion
• Simple and effective framework
↗ Step 1: select candidates
• Checking adjacent nodes to match one of three types
serial if-then if-then-else
• Iterative detection within a region
↗ Step 2: convert selected candidates to predicated code
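Step 1's pattern matching can be sketched over adjacency maps. The if-then-else (diamond) case looks like this; `succs`/`preds` and the function name are illustrative, not ORC's code:

```python
def match_if_then_else(bb, succs, preds):
    """Return (then, else, join) if bb heads an if-then-else diamond."""
    if len(succs.get(bb, ())) != 2:
        return None
    t, e = succs[bb]
    for side in (t, e):
        # each arm must be entered only from bb and fall into a single join
        if preds.get(side) != [bb] or len(succs.get(side, ())) != 1:
            return None
    if succs[t][0] != succs[e][0]:
        return None
    return (t, e, succs[t][0])

succs = {"B1": ["B2", "B3"], "B2": ["B4"], "B3": ["B4"], "B4": []}
preds = {"B2": ["B1"], "B3": ["B1"], "B4": ["B2", "B3"]}
assert match_if_then_else("B1", succs, preds) == ("B2", "B3", "B4")
```

The serial and if-then cases are simpler degenerate forms, and the detection is iterated within a region as the slide states.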
Generation of Parallel Compare
- Part of the if-conversion phase
- Step 1: profitability checking
- Current heuristics to detect simple patterns
- Step 2: inserting parallel compare instructions
- Example
```c
if ( a > b && c > d && e > f )
    s1;
else
    s2;
```
```
      p, q = 1
      cmp.gt.and.orcm p,q = a,b
      cmp.gt.and.orcm p,q = c,d
      cmp.gt.and.orcm p,q = e,f
(p)   s1
(q)   s2
```
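The chain above computes p = (a>b) && (c>d) && (e>f) and its complement q in a single dependence height. A sketch of the and/orcm write semantics, assuming p is initialized to 1 and q to 0 before the chain (the slide's initialization is abbreviated); purely illustrative:

```python
def cmp_and_orcm(p, q, cond):
    # and-target p and orcm-target q are written only when cond is false:
    # p is cleared, q is set. When cond is true, neither is touched.
    if not cond:
        p, q = 0, 1
    return p, q

def chain(conds):
    p, q = 1, 0
    for c in conds:
        p, q = cmp_and_orcm(p, q, c)
    return p, q

assert chain([True, True, True]) == (1, 0)   # all hold: s1 executes
assert chain([True, False, True]) == (0, 1)  # one fails: s2 executes
```

Because the compares only ever write the same values, they can issue in parallel without ordering constraints, which is what makes the sequence profitable.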
Features of If-conversion
• Effective and extensible cost model
➤ Taking into account
• critical path length
• resource usage
• branch mis-prediction rate (approx.) and penalty
• number of instructions
➤ Separate legality and profitability checking
• Easy to tune and extend the cost model
• Utilize parallel compare instructions
• Clean interface
➤ Feasible to change the phase ordering or replace with a new implementation
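The cost comparison behind such a model can be sketched with toy numbers: predicate a diamond when its merged height beats the branchy version's expected cost. The weights and penalty below are made-up illustrative values, not ORC's tuned heuristics:

```python
def branchy_cost(len_then, len_else, p_then, mispredict_rate, penalty):
    # expected path length plus expected branch misprediction penalty
    expected_path = p_then * len_then + (1 - p_then) * len_else
    return expected_path + mispredict_rate * penalty

def predicated_cost(len_then, len_else):
    # both arms issue under predicates; height is the longer arm
    # (assuming enough machine width to overlap them)
    return max(len_then, len_else)

# Short, poorly predicted branch: worth if-converting.
bc = branchy_cost(3, 4, p_then=0.5, mispredict_rate=0.3, penalty=10)
assert bc == 6.5 and predicated_cost(3, 4) == 4
```

A fuller model would also charge the predicated version for its extra instructions and resource usage, per the bullet list above.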
Features of Predicate Analysis
• Analyze predicate relations among both control flow and explicit predicates
• Query interface to PRDB: disjoint, subset/superset, complementary, sum, difference, probability, …
➤ Currently used during the construction of dependence DAG
• PRDB can be incrementally updated or deleted/re-computed at any phase
• Relations tracked using the well-known predicate partition graph but the analysis not assuming SSA form
• No coupling between the if-conversion and predicate analysis
➤ Can replace just one of them if desired
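The query interface can be illustrated under a simplifying model: represent each predicate as the set of condition outcomes (bitmasks over n branch conditions) in which it is true. The real PRDB works on a predicate partition graph; this only sketches the queries, and all names are made up:

```python
def universe(n):
    return set(range(1 << n))          # all outcomes of n conditions

def disjoint(p, q):
    return not (p & q)

def subset(p, q):
    return p <= q

def complementary(p, q, n):
    return disjoint(p, q) and (p | q) == universe(n)

n = 2                                   # two conditions c0, c1
c0 = {m for m in universe(n) if m & 1}  # predicate "c0 is true"
p1 = {m for m in c0 if m & 2}           # c0 && c1 (e.g. via cmp.unc chain)
p2 = universe(n) - c0                   # !c0

assert subset(p1, c0)
assert disjoint(p1, p2)
assert complementary(c0, p2, n)
```

Disjointness is what lets the scheduler omit dependences between instructions guarded by p1 and p2, as used during DAG construction.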
Predicate Partition Graph
- Partition generated by a normal compare type:

      (qp) p1, p2 = cmp.unc <condition>

  partitions qp into {p1, p2}.

- Partitions generated by parallel compares:

      (qp) p, … = cmp.and <condition>
      (qp) p, … = cmp.or <condition>
Additional Uses of PRDB
- Predicate-aware data flow analysis (e.g. in register allocation)
- Example in calculating live sets
- Query PRDB to get *Sum* or *Diff* of predicates to refine data-flow solutions
```
BB1:
(p) A = ....
BB2:
(x)   = A....
BB3:
(y) A = ....
```
Global Instruction Scheduling
Hardware Features for Instruction Scheduling
• Wide execution resources:
➤ Itanium: 2 I-units, 2 M-units, 2 F-units and 3 B-units
• Instruction mixes specified by templates
• Large register files:
➤ 128 GRs, 128 FRs, 64 PRs, and 8 BRs
Key Features of Instruction Scheduling
- Based on: D. Bernstein, M. Rodeh, “Global Instruction Scheduling for Superscalar Machines,” *PLDI 91*
- Features:
- performs on the scope of SEME regions
- cost function based on frequency-weighted path lengths computed from a region-based dependence DAG
- DAG construction makes use of PRDB
- drives a number of IPF-specific optimizations, e.g. speculation
- integrated with full resource management
- Global and local scheduling share the same implementation with difference in their scopes
Framework of Global Scheduling
• Build a global DAG for an entire SEME region
➤ Nested regions folded with summary info
• Select target BBs to schedule based on their topological ordering and execution frequencies
• For each target BB, identify all source BBs and then candidate instructions
• For each cycle, select ready instructions in their priority to schedule
➤ A forward, cycle scheduling
• Check the availability of machine resources to each selected instruction
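The forward, cycle-by-cycle loop above can be sketched with a toy machine model (a fixed number of issue slots per cycle and per-edge latencies). Plain critical-path height stands in for the frequency-weighted path length; all names are illustrative, not ORC's:

```python
def heights(nodes, succs, lat):
    """Critical-path height of each node in the dependence DAG."""
    memo = {}
    def h(n):
        if n not in memo:
            memo[n] = max((lat[(n, s)] + h(s) for s in succs.get(n, ())),
                          default=0)
        return memo[n]
    return {n: h(n) for n in nodes}

def schedule(nodes, preds, succs, lat, issue_width=2):
    prio = heights(nodes, succs, lat)
    ready_at = {n: 0 for n in nodes}   # earliest cycle inputs are available
    done, cycle, out = set(), 0, {}
    while len(done) < len(nodes):
        ready = [n for n in nodes if n not in done
                 and all(p in done for p in preds.get(n, ()))
                 and ready_at[n] <= cycle]
        # issue up to issue_width ready ops, highest priority first
        for n in sorted(ready, key=lambda n: -prio[n])[:issue_width]:
            out[n] = cycle
            done.add(n)
            for s in succs.get(n, ()):
                ready_at[s] = max(ready_at[s], cycle + lat[(n, s)])
        cycle += 1
    return out

succs = {"ld": ["add"], "add": ["st"], "cmp": ["br"]}
preds = {"add": ["ld"], "st": ["add"], "br": ["cmp"]}
lat = {("ld", "add"): 2, ("add", "st"): 1, ("cmp", "br"): 1}
sched = schedule(["ld", "add", "st", "cmp", "br"], preds, succs, lat)
assert sched["ld"] == 0 and sched["cmp"] == 0  # two slots fill in cycle 0
assert sched["add"] == 2                       # waits out the load latency
```

The real scheduler additionally consults the PRDB, the templates of the machine model, and region boundaries when deciding what is "ready".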
Process of Instruction Scheduling
- Critical edge splitting
- Find source basic blocks
- Find candidates
- Select best one
- Control speculation
- Code motion
- Motion of code with disjoint predicates
- Code motion
- Data speculation
Example region before scheduling (basic blocks from the CFG figure):
```
cmp4.gt p14,p15=0,r11
(p14) sub r15=0,r31
(p15) mov r15=r31
br.cond .Lt_0_4

ld8 r10=[r19]
shladd r9=r9,1,r10
st2 [r9]=r0
br .Lt_0_5

br.cond .Lt_0_6

ld4 r21=[r33]
cmp4.le p0,p9=r22,r21
(p9) br.cond .BB5_foo
```
```
... cmp4.gt p14,p15=0,r11
(p14) sub r15=0,r31
(p15) mov r15=r31
(p13) br.cond .Lt_0_4
... cmp4.eq p11,p0=1,r14
(p11) br.cond .Lt_0_6
shladd r9=r9,1,r10
st2 [r9]=r0
br .Lt_0_5
ld8.s r10=[r19]
ld4 r21=[r33]
cmp4.le p0,p9=r22,r21
(p9) br.cond .BB5_foo
```
```
... cmp4.gt p14,p15=0,r11 ld8.s r10=[r19]
(p14) sub r15=0,r31 cmp4.eq p11,p0=1,r14
(p15) mov r15=r31 ...
(p13) br.cond .Lt_0_4
... chk.s r10.Lt_rb_1 ...
shladd r9=r9,1,r10 (p11) br.cond .Lt_0_6
st2 [r9]=r0 ...
br .Lt_0_5 ...
(p11) br.cond .Lt_0_58
... ld4 r21=[r33] ...
cmp4.le p0,p9=r22,r21 (p9) br.cond .BB5_foo
(p9) br.cond .Lt_0_5 ...
```
```
cmp4.gt p14,p15=0,r11
(p14) sub r15=0,r31
ld8.s r10=[r19]
(p15) mov r15=r31
(p13) br.cond .Lt_0_4
chk.s r10 .Lt_rb_1
shladd r9=r9,1,r10
st2 [r9]=r0
br .Lt_0_5
ld4 r21=[r33]
cmp4.le p0,p9=r22,r21
(p9) br.cond .BB5_foo
```
Process of Instruction Scheduling (Cont.)
- Choose target basic block
- Find source basic blocks
- Find candidates
- Select best one
- Code motion
- Compensation code generation
```
cmp4.gt p14,p15=0,r11
ld8.s r10=[r19]
cmp4.eq p11,p0=1,r14
(p14) sub r15=0,r31
(p15) mov r15=r31
ld4.a r21=[r33]
shladd r9=r9,1,r10
(p13) br.cond .Lt_0_4

cmp4.le p0,p9=r22,r21
(p9) br.cond .BB5_foo

ld8.s r10=[r19]
chk.s r10 .Lt_rb_1
st2 [r9]=r0
br .Lt_0_5

chk.a r21 .Lt_rb_2
cmp4.le p0,p9=r22,r21
(p9) br.cond .BB5_foo
```
(CFG figure: remaining basic-block fragments with compensation code)
```
... chk.s r10 .Lt_rb_1  st2 [r9]=r0
br .Lt_0_5

... (p11) br.cond .Lt_0_6

... (p10) br.cond .Lt_0_58
```
Process of Instruction Scheduling (Cont.)
• Choose target basic block
• Find source basic blocks
• Find candidates
• Select best one
• Code motion
• Compensation code generation
```
ld8.s r10=[r19]
ld4.a r21=[r33]
```
```
cmp4.gt p14,p15=0,r11
cmp4.eq p11,p0=1,r14
```
```
(p14) sub r15=0,r31
(p15) mov r15=r31
```
```
shladd r9=r9,1,r10
```
```
(p13) br.cond .Lt_0_4
```
```
cmp4.le p0,p9=r22,r21
```
```
(p9) br.cond .BB5_foo
```
```
chk.a r21 .Lt_rb_2
```
```
chk.a r21 .Lt_rb_2
```
```
cmp4.le p0,p9=r22,r21
```
```
(p9) br.cond .BB5_foo
```
```
chk.s r10 .Lt_rb_1
```
```
br .Lt_0_5
```
```
st2 [r9]=r0
```
```
(p14) sub r15=0,r31
```
```
(p15) mov r15=r31
```
```
shladd r9=r9,1,r10
```
```
(p13) br.cond .Lt_0_4
```
```
chk.a r21 .Lt_rb_2
```
```
cmp4.le p0,p9=r22,r21
```
```
(p9) br.cond .BB5_foo
```
Process of Instruction Scheduling (Cont.)
- Choose target basic block
- Find source basic blocks
- Find candidates
- Select best one
- Code motion
- Compensation code generation
```
cmp4.gt p14,p15=0,r11
ld8.s r10=[r19]
cmp4.eq p11,p0=1,r14
sub r15=0,r31
mov r15=r31
shladd r9=r9,1,r10
ld4.a r21=[r33]
```
```
cmp4.ge p6,p7=r22,r21
br.cond .BB5_foo
```
Process of Instruction Scheduling (Cont.)
- Choose target basic block
- Find source basic blocks
- Find candidates
- Select best one
- Code motion
- Compensation code generation
```
cmp4.gt p14,p15=0,r11   ld8.s r10=[r19]     cmp4.eq p11,p0=1,r14
(p14) sub r15=0,r31     (p15) mov r15=r31   shladd r9=r9,1,r10
ld4.a r21=[r33]         (p13) br.cond .Lt_0_4

chk.s r10 .Lt_rb_1      st2 [r9]=r0
cmp4.le p0,p9=r22,r21   br .Lt_0_5

chk.a r21 .Lt_rb_2
(p9) br.cond .BB5_foo

chk.a r21 .Lt_rb_2
(p9) br.cond .BB5_foo

cmp4.le p0,p9=r22,r21
(p10) br.cond .Lt_0_58
```
(CFG figure; edge probabilities in the original: 0.87/0.13 and 0.01/0.99.)
Prospective Research Usage
• Experiment with different scheduling heuristics
• Drive additional IPF optimizations
➤ E.g. post-increment, multiway branch synthesis, …
• Be conscious about register pressure
• Replace it with your own scheduler
• Make it a standalone instruction scheduler
➤ Connect it with other compilation systems, e.g. a binary translation system.
Control and Data Speculation
Architectural Support for Control Speculation
- NaT bits on registers
- Speculative and non-speculative versions of trapping instructions
- Speculative instructions to defer exceptions
- Check instructions, *chk.s*
```assembly
// original
B1:   cmp p=(cond)
      (p)br B2
B2:   ld x = [y]
      add z = x, w

// speculative chain hoisted above the branch
B1:   ld.s x = [y]
      add z = x, w
      cmp p=(cond)
      (p)br B2
B2:   chk.s x, Rec
Next: ...
Rec:  ld x = [y]
      add z = x, w
      br Next
```
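The NaT-bit mechanism above can be sketched as a toy interpreter (illustrative only; `Reg`, `ld_s`, and `chk_s` are invented names for this sketch, not ORC or IA-64 APIs): a speculative load defers its fault into a NaT bit, the bit propagates through uses, and the check triggers recovery only if the deferred fault actually reached it.

```python
# Toy model of IA-64 control speculation (names invented for
# the sketch; not ORC source or the real architecture interface).

class Reg:
    def __init__(self, val=0):
        self.val, self.nat = val, False

def ld_s(dst, mem, addr):
    """Speculative load: defer the exception by setting the NaT bit."""
    if addr in mem:
        dst.val, dst.nat = mem[addr], False
    else:
        dst.nat = True              # would have faulted -> deferred

def add(dst, a, b):
    dst.nat = a.nat or b.nat        # NaT propagates through uses
    dst.val = a.val + b.val

def chk_s(reg, recover):
    """chk.s: branch to recovery code only if NaT reached the check."""
    if reg.nat:
        recover()

mem = {0x200: 7}
x, w, z = Reg(), Reg(3), Reg()

ld_s(x, mem, 0x300)                 # bad address: NaT set, no trap yet
add(z, x, w)                        # deferred fault flows into z

def recover():                      # non-speculative re-execution
    x.val, x.nat = mem.get(0x300, 0), False
    add(z, x, w)

chk_s(x, recover)                   # fault handled only here
```

If the branch guarding B2 is not taken, the check never executes and the deferred fault is simply discarded, which is the whole point of the scheme.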
## Architectural Support for Data Speculation
- Advanced (data speculative) load *ld.a*
- Advanced Load Address Table (ALAT)
- Store invalidates aliasing entries in ALAT
- Check instruction, *chk.a*
```
// original
      st  [a1] = b
      ld   x  = [a2]
      add  z  = x, w

// with data speculation
      ld.a x  = [a2]      // advanced load, allocates ALAT entry
      add  z  = x, w
      st  [a1] = b        // store invalidates aliasing ALAT entries
      chk.a x, Rec
Next: ...
Rec:  ld   x  = [a2]
      add  z  = x, w
      br Next
```
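The ALAT protocol in the bullets can be mimicked with a small dictionary model (a sketch with invented names, not the hardware interface): `ld.a` allocates an entry, a store removes aliasing entries, and `chk.a` redoes the load only when the entry has been lost.

```python
# Toy ALAT model (names invented; not the real hardware interface).
alat = {}                # register name -> speculated load address
mem = {0x10: 5}

def ld_a(reg, addr, regs):
    """Advanced load: perform the load and allocate an ALAT entry."""
    regs[reg] = mem.get(addr, 0)
    alat[reg] = addr

def st(addr, val):
    """Store: invalidate any ALAT entry for an aliasing address."""
    mem[addr] = val
    for r, a in list(alat.items()):
        if a == addr:
            del alat[r]

def chk_a(reg, addr, regs):
    """Check: re-execute the load only if the ALAT entry was lost."""
    if reg not in alat:
        regs[reg] = mem[addr]

regs = {}
ld_a('x', 0x10, regs)    # x speculatively loaded before the store
st(0x10, 9)              # aliasing store kills the entry
chk_a('x', 0x10, regs)   # recovery reloads the up-to-date value
```

When the store does not alias the load, the entry survives and `chk.a` is a no-op, which is the common, profitable case.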
Compilation Issues for Control Speculation - Interferences and Live-in Values
• Avoiding interferences for the destination registers of speculated instructions
• Recovering from a deferred exception
➤ Upward-exposed values used in the speculative chain must not be overwritten
\[
\begin{align*}
B1: & \quad \text{cmp } p = (\text{cond}) \\
& \quad \text{(p)br B2} \\
B2: & \quad \text{ld } x = [y] \\
& \quad \text{add } z = x, w \\
& \quad \text{add } y = y + 4
\end{align*}
\]
\[
\begin{align*}
B1: & \quad \text{ld.s } x = [y] \\
& \quad \text{add } z = x, w \\
& \quad \text{cmp } p = (\text{cond}) \\
& \quad \text{(p)br B2} \\
B2: & \quad \text{chk.s } x, \text{Rec} \\
& \quad \text{add } y = y + 4 \\
\text{Next: } & \\
\text{Rec: } & \quad \text{ld } x = [y] \\
& \quad \text{add } z = x, w \\
& \quad \text{br Next}
\end{align*}
\]
Compilation Issues for Control Speculation - ld across check
• Dependent load (to ld.s) scheduled across a check
➤ The load must be put under speculative mode
➤ Deferred exception propagated and not to be signaled before the first check
```
ld.s x = [y]
chk.s x, Rec
ld z = [x]
chk.s z, Rec2
```
```
ld.s x = [y]
ld.s z = [x]
[chk.s x, Rec]
chk.s z, Rec2
```
```
ld.s x = [y]
ld.s z = [x]
chk.s x, Rec
```
*correct*
*wrong*
Compilation Issues for Data Speculation - Predication
• Excluding predicated code from a data speculative chain when the qualifying predicate is defined on the chain
\[
\begin{align*}
\text{ld.a } x &= [a1] \\
\text{cmp } p &= x < y \\
\text{st } [a2] &= w \\
\text{chk.a } x, \text{ Rec} \\
\text{(p)add } z &= b, c
\end{align*}
\]
\[
\begin{align*}
\text{ld.a } x &= [a1] \\
\text{cmp } p &= x < y \\
\text{(p)add } z &= b, c
\end{align*}
\]
Wrong – z may not be recoverable
\[
\begin{align*}
\text{ld } x &= [a1] \\
\text{cmp } p &= x < y \\
\text{st } [a2] &= w \\
\text{chk.a } x, \text{ Rec} \\
\text{(p)add } z &= b, c
\end{align*}
\]
Cascaded Speculation
- A value defined by a speculative load directly or indirectly feeds into another speculative load scheduled before the first check
- Combinations in cascaded speculation
- Control-speculation-led
- Data-speculation-led
Cascaded Speculation
Control-Speculation-Led
• Different strategies to generate recovery code
• Code size vs. recovery overhead vs. ease of implementation
```
ld.s x = [a]
...
ld.s y = [x]
[(p)chk.s x, Rec1]
c1:
(q)chk.s y, Rec2
c2:
Rec1:
ld x = [a]
...
br c1
Rec2:
ld y = [x]
br c2
```
Strategy 1
```
ld x = [a]
...
ld.s y = [x]
...
br c2
```
Strategy 2
```
ld x = [a]
ld y = [x]
...
br c2
```
Strategy 3
```
Rec2:           // only recovery block (w/o 1st chk.s; NaT prop. to y)
ld x = [a]
ld y = [x]
...
br c2
```
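Strategy 3 relies on NaT propagation: a deferred fault on `x` flows into `y`, so a single check on `y` covers both speculative loads, at the cost of a recovery block that re-executes the whole chain. A toy sketch (names invented for illustration; `spec_fail` just simulates a spuriously failing speculation):

```python
# Toy sketch of cascaded control speculation, strategy 3:
# one check on the last value of the chain covers both loads.

NAT = object()          # stands in for the NaT bit

def ld_s(mem, addr, spec_fail=()):
    """Speculative load; returns NAT instead of faulting.
    spec_fail simulates addresses whose speculation fails."""
    if addr in spec_fail or addr not in mem:
        return NAT
    return mem[addr]

def chained_load(mem, a, spec_fail=()):
    x = ld_s(mem, a, spec_fail)                           # ld.s x = [a]
    y = ld_s(mem, x, spec_fail) if x is not NAT else NAT  # NaT propagates
    if y is NAT:          # single chk.s on y (strategy 3)
        x = mem[a]        # recovery block: redo the whole chain
        y = mem[x]        # with plain, non-speculative loads
    return y

mem = {1: 2, 2: 42}
```

Whether the first load, the second load, or both mis-speculate, the one check on `y` routes execution into the same recovery block.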
ORC Tutorial
Cascaded Speculation
Data-Speculation-Led
• If the first load is mis-speculated, there is no NaT bit to propagate the fault
• The first recovery block invalidates the second chk to ensure the second recovery block is executed
```assembly
      ld.a x = [a]
      ...
      ld.sa y = [x]
(p)   chk.a x, Rec1
c1:
(q)   chk.a y, Rec2
c2:   ...

Rec1: ld x = [a]
      ...
      invala y        // invalidate ALAT entry so the 2nd chk.a recovers
      br c1
Rec2: ld y = [x]
      br c2
```
Scheduling Speculative Instructions
• Speculation is part of DAG-based list scheduling phase
• Marking speculative dependence edges for identified candidates during DAG construction
➤ Control and data speculative edges
• Instruction is ready when all of its non-speculative predecessors scheduled
• Scheduler decides the loads to be speculated
➤ Insert \textit{chk} instruction
➤ Add DAG edges from \textit{chk} to the successors of speculated load to ensure recoverability
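The readiness rule above — speculative predecessors do not block an instruction — can be sketched as follows (structure assumed for illustration, not ORC source):

```python
# Sketch of the readiness test in DAG-based list scheduling:
# speculative dependence edges are ignored, so a load can become
# ready (i.e. hoistable) before the branch/store it depends on.

def ready(inst, deps, scheduled):
    """deps maps inst -> list of (pred, is_speculative) edges."""
    return all(spec or pred in scheduled
               for pred, spec in deps.get(inst, []))

deps = {
    'branch': [],
    'load':   [('branch', True)],   # control-speculative edge
    'use':    [('load', False)],    # true data dependence
}
scheduled = set()
```

Here `ready('load', ...)` holds even with nothing scheduled, because its only predecessor edge is speculative; the `use` stays blocked until the load commits.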
Recovery Code Generation
- Recovery code generation decoupled from scheduling phase
- Reduce the complexity of the scheduler
- To generate recovery code
- Starting from the speculative load, follow flow and output dependences to re-identify speculated instructions
- Duplicate the speculated instructions to a recovery block under the non-speculative mode
- Once a recovery block is generated, avoid changes on the speculative chain
- Allow GRA to properly color registers in recovery blocks
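The re-identification step can be sketched as a worklist walk over flow dependences (a simplification with invented names, not the ORC implementation):

```python
# Sketch: from the speculative load, follow flow dependences to
# collect the speculated chain, then emit non-speculative duplicates
# (ld.s/ld.a rewritten to plain ld) as the recovery block.

def build_recovery(spec_load, flow_deps):
    """flow_deps maps an instruction to its data-dependent successors."""
    chain, work, seen = [], [spec_load], set()
    while work:
        inst = work.pop(0)
        if inst in seen:
            continue
        seen.add(inst)
        chain.append(inst)
        work.extend(flow_deps.get(inst, []))
    # duplicate the chain under non-speculative mode
    return [i.replace('ld.s', 'ld').replace('ld.a', 'ld') for i in chain]

flow = {'ld.s x=[y]': ['add z=x,w']}
rec = build_recovery('ld.s x=[y]', flow)
```

Doing this as a separate pass after scheduling is exactly the decoupling the slide describes: the scheduler never has to reason about recovery blocks.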
Parameterized Machine Model
Machine Model
• Motivations:
➤ To centralize the architectural and micro-architectural details in a well-interfaced module
➤ To facilitate the study of hardware/compiler co-design by changing machine parameters
➤ To ease the porting of ORC to future generations of IPF
• Two aspects:
➤ Parameterized machine descriptions
➤ Micro-scheduler to model resource constraints
Machine Descriptions
• Read in the (micro-)architecture parameters from KAPI (Knobsfile API) published by Intel
↗ E.g. machine width, FU class, latencies, templates, bypass …
↗ In v26-itanium-41-external.knb
• Keep additional hardware specifications in a separate file
↗ E.g. Pro64 opcode, registers, # of issue slots, …
↗ In v11-itanium-extra.knb
• Automatically generate the machine description tables in Pro64
↗ E.g. targ_isa_….[c|h|exported], targ_proc…, topcode…
• Avoid multiple changes of the same info duplicated into different tables
• The machine descriptions can be consumed by various optimization phases
Micro-Scheduler
- Manage resource constraints
- E.g. templates, dispersal rules, FU’s, machine width, …
- Model instruction dispersal rules
- Interact with the high-level instruction scheduler
- Yet to be integrated with SWP
- Reorder instructions within a cycle
- Use a finite state automata (FSA) to model the resource constraints
- Each state represents occupied FU’s
- State transition triggered by incoming scheduling candidate
Modules in Machine Model
Integrated Instruction Scheduler
- High-Level scheduler
- Micro-level scheduler
- Machine description interface
Off-line Machine Model Component
- MM Builder
- FSA Table
- KAPI
- Machine Description File
- Knobsfile
- Func. invocation
- Data path
Functional-unit Based FSA
- Generate FSA prior to compilation
- Model resource constraints
- FSA states based on occupied FU’s for space efficiency
- Each state contains a list of legal template assignments
- Sorted in a priority order, e.g. for code size
- State transition triggered
- Incoming scheduling candidate
- Reordering to obtain the needed FU’s
**Functional-unit Based FSA**
- Template assignment not selected except for
- Intra-cycle (0-cycle) dependence
- Finalizing template assignment with 1-cycle delay
- Able to utilize compressed templates
- For Itanium, 2 bundles per cycle
- ORC FSA has 235 states
- Each state has at most 38 valid template assignments.
- 75% of the states have < 10 assignments
- Changing the machine parameters, e.g. machine width, will generate a new FSA automatically
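The state encoding can be sketched like this (a toy version with a simplified FU list; the real ORC tables are generated from the machine description): a state is the set of occupied FUs, and a candidate triggers a transition only if one of its legal FUs is still free.

```python
# Toy FU-based FSA: states are sets of occupied functional units.
MACHINE_FUS = {'M0', 'M1', 'I0', 'I1', 'F0', 'B0'}   # simplified

def transition(state, candidate_fus):
    """Return (new_state, fu) for the first free legal FU, else None."""
    for fu in candidate_fus:
        if fu in MACHINE_FUS and fu not in state:
            return state | {fu}, fu
    return None        # resources exhausted: candidate must wait

state = frozenset()
state, fu1 = transition(state, ['M0', 'M1'])    # 1st load
state, fu2 = transition(state, ['M0', 'M1'])    # 2nd load this cycle
blocked = transition(state, ['M0', 'M1'])       # 3rd load: no M unit
```

Keying states on occupied FUs rather than on instruction sequences is what keeps the table small (235 states for Itanium, per the slide).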
Functional-unit Based FSA (Example)

LegalTA
num_of_TAs
TAs
MLX MI_I
MFI MI_I
MLX MLX
MLX MFI
MFB MII
MFI MIB
MFI MFI
MFI MLX
MLX MIB
Integrated Instruction Scheduling
- Instruction scheduling integrated with full resource management through micro-scheduler
- Repeatedly pick the best candidate based on scheduling cost function
- Micro-scheduler to make state transition in FSA to check the availability of resources
- Micro-scheduler may permute FU assignments to meet the dispersal rules
- If the resource constraints met, the scheduler can choose to commit the candidate
- If resources fully utilized or no ready candidate available, the scheduler advances to schedule the next cycle
- Template assignment finalized with a 1-cycle delay
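The commit/advance loop can be sketched as follows (a simplification with invented names; "best candidate" is just list order here, and the FSA transition is reduced to a free-FU check):

```python
# Sketch of integrated scheduling: commit a candidate only if the
# resource check (standing in for the FSA transition) succeeds;
# otherwise advance to the next cycle.

def schedule(ready, fus_of, machine_fus):
    cycles, cycle, used = [], [], set()
    pending = list(ready)
    while pending:
        progress = False
        for inst in list(pending):
            free = [f for f in fus_of[inst] if f not in used]
            if free:                      # "FSA transition" succeeds
                used.add(free[0])
                cycle.append(inst)
                pending.remove(inst)
                progress = True
        if not progress or len(used) == len(machine_fus):
            cycles.append(cycle)          # advance to the next cycle
            cycle, used = [], set()
    if cycle:
        cycles.append(cycle)
    return cycles

fus = {'ld1': ['M0', 'M1'], 'ld2': ['M0', 'M1'],
       'ld3': ['M0', 'M1'], 'shl': ['I0']}
plan = schedule(['ld1', 'ld2', 'ld3', 'shl'], fus,
                ['M0', 'M1', 'I0', 'I1'])
```

With only two M units, the third load spills into the next cycle even though the machine width is not exhausted, which is the kind of dispersal effect the decoupled levels 0-2 miss.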
Integrated Scheduling - Example
I₁: ld a = [x]
I₂: shl g = h, i
I₃: ld y = [e]
I₄: ld z = [f]
Valid FUs for instructions
I₁, I₃, I₄: M0, M1
I₂: I0
I₅: I0, I1
I₆: M0, M1, I0, I1
FSA state: {}
Intra-cycle dependence (ICD): N or Y
Tentative template assignment (TTA)
High-level Scheduler
- **cyc 0; cand: I₁, I₂, I₃, I₄; sched I₁**
- M₀ to I₁; S={M₀}; ICD: N; TTA=
- **commit I₁; cand: I₅, I₂, I₃, I₄; sched I₅**
- I₀ to I₅; S={M₀, I₀}; ICD: Y; TTA=M₁_I
- **commit I₅; cand: I₂, I₃, I₄; sched I₂**
- Permute FU’s; I₀ to I₂, I₁ to I₅; S={M₀, I₀, I₁}; ICD: Y; TTA=M₁I
- **commit I₂; cand: I₃, I₄; sched I₃**
- M₁ to I₃; S={M₀, M₁, I₀, I₁}; ICD: Y; TTA=M₁I M_MI
- **commit I₃; cand: I₄; sched I₄**
- No M unit for I₄
- **commit I₄; cand: ; final assignment**
- {M₁I: I₁ I₂ I₅} {M_MI I₃; I₄ I₆;}
Micro-level Scheduler
- **adv to cyc 1; cand: I₆, I₄; sched I₆**
- M₀ to I₆; S={M₀}; ICD: N; TTA=
- **commit I₆; cand: I₄; sched I₄**
- M₁ to I₄; S={M₀, M₁}; ICD: N; TTA=
- **commit I₄; cand: ; final assignment**
- Permute FU’s for cyc 1; cyc 1 S={M₀, I₀}; ICD: Y; TTA=
• Overview of ORC
• New Infrastructure Features
• New IPF Optimizations
• Research Case Study
• Demo of ORC
• Release and Future Plans
Instruction Scheduling and Resource Management
Integrated vs. Decoupled
Research Case Study
- Want to demonstrate the advantages of scheduling integrated with resource management
- To contrast with decoupled approaches
- Traditional scheduler followed by a separate bundling phase
- Minimize the effort to implement the decoupled approaches
- Build an independent bundling phase using micro-scheduler
- Select template assignments for scheduled instructions
- Honor the cycle breaks placed by the traditional scheduler
- Trivial effort
- More powerful than the handle_hazard() in Pro64
Scheduling/Bundling Approaches
• Level 0: scheduler w/o any resource management + separate bundling
• Level 1: scheduler w/ machine width constraint + separate bundling
• Level 2: scheduler w/ constraints on width and FU’s + separate bundling
• Level 3: integrated scheduling and resource management
➤ Template assignment, dispersal rules, …
• Perform experiments to collect various data on all SPEC CPU2000* integer programs
* Other names and brands may be claimed as the property of others
Speedup of Execution Time
* Source: CAS
Some Observations
• Little performance change at levels 0-2
• IPC well below machine width
➡ The width constraint at level 1 not critical
• FU utilization also low
➡ The FU constraints at level 2 not critical either
• Level 3 shows an impressive average 11% improvement
➡ IPC improved by 13%
➡ NOP ratio, bundles per cycle, instruction count all increase
➡ Gain parallelism at the cost of code size
➡ Still an overall performance win!
* Performance measurement disclaimer
Performance tests and ratings are measured using specific computer systems and/or components and reflect the approximate performance of Intel products as measured by those tests. Any difference in system hardware or software design or configuration may affect actual performance. Buyers should consult other sources of information to evaluate the performance of systems or components they are considering purchasing. For more information on performance tests and on the performance of Intel products, reference www.intel.com/procs/perf/limits.htm or call (U.S.) 1-800-628-8686 or 1-916-356-3104.
Compilation Time
* Source: CAS
More Observations
- Compilation time includes the time for global and local scheduling and bundling
- Bundling time is typically well below 10%
- Within 5% differences for all levels
- Important to manage all resource constraints during scheduling
- Our integrated scheduling approach provides
- Good performance improvements over decoupled approaches
- Time and space efficiency
- Possibly apply to other architectures:
- VLIW, DSP, superscalar w/ complex dispersal rules, …
Lessons Learned
• Trivial effort to implement and plug-in alternatives for research study
➤ Modularized design, clean interface, …
• Robust compiler infrastructure to run sizeable benchmarks, such as CPU2K and other applications
• Focus effort on studying the key research problem and collecting experimental results for in-depth analysis
➤ Minimize the effort on the remaining infrastructure issues
• This study “On-the-fly Resource Management during Instruction Scheduling for the EPIC Architecture” is submitted to PLDI 2002.
• Overview of ORC
• New Infrastructure Features
• New IPF Optimizations
• Research Case Study
• Demo of ORC
• Release and Future Plans
• Overview of ORC
• New Infrastructure Features
• New IPF Optimizations
• Research Case Study
• Demo of ORC
• Release and Future Plans
Release and Future Plan
Agenda
• First release report
• Second release and beyond
• Licensing, distribution and support
• User groups
• Contributing organizations and individuals
First Release Report
State of ORC
• -O0 and –g go through Pro64 path
• Supported optimization levels
↗ -O2 (global scalar opt, if-conversion, global scheduling, simple array dep. analysis, GRA, SWP, unrolling, …)
↗ -O3 (all –O2 optimizations, loop nest opt., aggressive array dep. analysis, more global scalar opt)
↗ Various profiling at code generation time:
• Edge profiling
• Value profiling
• Memory operation profiling and distribution
Performance
- **Pro64 standing:**
- About 5% - 10% **better** than GCC (2.96) at O2 and O3
- About 10% - 15% **slower** than Intel IPF Compiler (5.0 and 6.0 Beta) at O2 and O3
- Seen extreme cases both ways
- **ORC standing:**
- Focus is on correctness and infrastructure for this time
- Performance is better than Pro64 at O2 and O3
*Performance measurement disclaimer*
Testing Status
• Well tested at \(-O2\) and \(-O3\) level for general purpose applications.
➤ Test suites passed:
• Stanford, Olden, Jpeg, Mesa, ADPCM, CPU2000int…
• Adequately tested at \(-ipa\) (with \(-O2\) and \(-O3\))
➤ Pro64 has 4 failures with CPU2000int
➤ ORC on par with Pro64 correctness-wise
➤ Will fix in second release
Testing Status (cont’d)
• Scientific programs:
➤ Adequately tested, but not enough
• CPU2000fp has 4 known problems at O2 and O3 as we speak, plan to fix ASAP
• Linpack, Livermore loops passed
• Perfect club not tried
➤ Not major focus for our limited resource
Usage Model
• Invoking ORC:
- orcc hello.c -o hello { -O2 | -O3 }
- orc++ hello.cpp -o hello { -O2 | -O3 }
- orf90 hello.f90 -o hello { -O2 | -O3 }
- orf90 hello.f -o hello { -O2 | -O3 }
• Skipping ORC:
- Add option:
• -ORC:=off
• Reverts the compiler to be the same as Pro64
Second Release and Beyond
Planned Features
Key concentration for second release
• Performance – *general purpose apps* only
➦ O2 / O3 comparable to best Itanium compiler
• To ensure solid infrastructure framework
➦ Sufficient peak performance to make research results trustworthy
• Infrastructure for research remains the key
➦ No benchmark tricks
➦ No micro-architectural tuning that cannot be translated cleanly into other uArch
Optimizations
• Memory optimizations
➣ E.g. data prefetching, various profiling extensions, …
• Better utilization of IPF features
➣ E.g. post-increment, predicate aware in various phases, …
• Inter-procedural analysis
➣ E.g. aggressive alias analysis, …
• Scheduling/speculation
➣ E.g. tune down aggressiveness, partial ready code motion, …
Infrastructure Features
• Multithread support
➢ Multithread centric region formation
➢ Code motion barriers without disabling optimization
• Annotations of binaries
➢ co-design of architecture and compiler
• Interface to simulators
➢ Architectural studies
➢ Plan to work with Liberty and/or other simulators
Fix Existing Issues
• Existing Pro64 issues
- IPA not fully functional
- Register pressure too high for IPF
- Compiler binary not native built
- Other tuning issues
- SWP, inlining,…
- Fix extreme cases compared with other compilers
• Existing ORC issues
- Monitor compile time in scheduler
- Instrumented binary overhead not minimized
Release Timeline
- **Jan 2002**
- First source/binary release
- **Middle of 2002**
- Compiler stable with IPA
- Some benchmark performance comparable to best Itanium compilers
- **End of 2002**
- Performance goal achieved
- Infrastructure features
All dates specified are preliminary, for planning purposes only, and are subject to change without notice.
Issues Not Planned to Address
• Integration with 3.0 gcc
• FP performance
• Compiler bootstrap itself
• Linux build using ORC
• F90 frontend or library problems
Inviting Research based on ORC
- Performance-driven optimizations
- Thread level parallelism
- Co-design of architecture and compiler
- Retarget to type-safe language (CLI, Java, …)
- Type-safety through aggressive optimizations
- Component level optimizations (.Net, .so, …)
- Higher typed optimizations (user defined types, operators, …)
Inviting Research based on ORC
- Optimization for memory hierarchy
- Co-design of static and dynamic compilation
- Program analysis
- Context sensitive and flow sensitive alias analysis
- Type hierarchy
- …
- Inlining/outlining/partial inlining
- Power management
- …
Source and Binary Structure
• Source tree structure
➔ Same as Pro64 source organization
![Diagram]
- be
- ... opt, lno, com, ...
- cg
- ... orc
- ... ia64, ...
Backend Binary Components
BE
be.so wopt.so lno.so cg.so ia64.so
orc_ipf.so orc_infra.so
ORC Testing Infrastructure
- Testing model
- Developer written tests for specific optimization
- White box tests for his specific component
- Developer written tests for integration
- White box tests for his component working well with other optimizations and components
- Various open source test suites for black box testing
- The open-source release includes a simple Perl script to run tests at various levels defined by the development team.
ORC Testing Infrastructure
- Simple Perl script to run checkin and/or nightly extensive testing
- Can specify default options and required options
- Required option to ensure specific opt turned on
- Default option can be overridden to enable testing by permuting options
- Can specify running of entire test suites at specified options
Development Aids
• Debugging
↩ gdb, xxgdb
↩ Traces:
• Before and after optimization IR dump
• Detail traces of optimization/analysis info and decisions
↩ Log:
• Log of what optimizations performed at what phase
↩ Various IR and symbol table dumping tools
↩ Elaborate interface to daVinci graph by ORC:
• Display inside gdb of cfg, regions, BB, …
• Similar display in files through command-line options
Development Aids
• Debugging (cont’d)
↗ Triage tools
• How to debug file from Pro64 0.13 release:
“howto-debug-compiler”
• Major components built as “.so”s
↗ Easy to pinpoint component for regressions
• All optimizations can be turned on/off
↗ Easy to pinpoint which opt. that triggers the problem
• Turn off optimization in reverse phase order
Development Aids
• Debugging (cont’d)
↗ Triage tools
• Automatic tool by ORC
↗ Can pinpoint file, function, BB, expression, region or instruction level where bug is manifested.
↗ Can pinpoint which component, optimization where bug is manifested
Usage:
`triage.pl -iset test -phase speculation -f gzip.mk`
iset: input test set
phase: optimization phase to narrow down
Development Aids
• Expose bugs at compile time philosophy
¬ Heavy use of **assertions**
¬ Optional **devwarns** for potential problems and temporary workarounds that might have been forgotten
¬ Heavy use of verification tools and **verifiers** of all sorts
• Styles and coding convention document in first release:
¬ Stay close to Pro64 coding style and explanation
¬ Include “how to” for memory management, asserts, compile time accounting,…
Development Aids
• Performance analysis and regression tracking
➤ PFMON available from
• ftp://ftp.hpl.hp.com/pub/linux-ia64/pfmon-0.06.tar.gz
• Hardware counters
• PC sampling
➤ Cycle counting tools by ORC
• Static estimation of cpu execution time
➤ Instrumentation/profile tools
• Dynamic runtime cycle count estimation
• Memory distribution analyzer
PFMON Data
Source: Intel
Development Aids
• Simulators
➤ NUE
• IPF functional simulator from website: http://www.software.hp.com/ia64linux
➤ Liberty
David August, Princeton
• Open for interface to other simulators
➤ Cache simulator?
➤ Simple Scalar?
➤ Others (any takers?)
Licensing, distribution and support
Licensing, Distribution and Support
• The compiler will be distributed in the web-site
http://sourceforge.net/projects/ipf-orc
• Latest information and update are placed there also
• Distribution includes:
❖ Binary
❖ Source Code
❖ Test and triage infrastructure including scripts
❖ Documents and various tools
Licensing
- Pro64 backend components (be, wopt, lno, cg): GNU GPL
- ORC components (infrastructure, opt): BSD
Licensing
- BSD license url
http://www.opensource.org/licenses/bsd-license.html
Distribution
- Download:
http://prdownloads.sourceforge.net/ipf-orc/orc-1.0.0.tgz
☀ Compiler binaries are IA32 images (cross built)
☀ Will run slow on an Itanium machine
Distribution and Installation
• To install on an Itanium machine (Redhat 7.1):
%su
./install.sh
• To install under nue:
%su
#nue
#./install.sh
Distribution and Installation
• To do on an IA32 machine
➤ cross build, easiest is to use *nue*
➤ Be careful about library compatibility issues
• *nue* is not 7.1 and up compatible
➤ Don’t pick up IA32 includes, libraries and .so’s
• Cross compile on an IA32 machine under *nue*
➤ IA32 side:
• orcc –c file.c –o file.o
➤ Produce object file
• ftp IA64
➤ Transfer object files to itanium machine
➤ IA64 side:
• orcc file.o –o exec
➤ Produce binary with right libraries
• ./exec
Support
- For issues with installation, compiler usage, use mail alias:
*ipf-orc-support@lists.sourceforge.net*
- Please sign on to
*ipf-orc-support@lists.sourceforge.net*
Support
• Bugs can be reported via Sourceforge in the website
➤ Select support requests under tracker
➤ Choose submit new
• Will fix ORC specific problems
• Cannot promise to fix Pro64™ problems
Reporting Problems
• You can help resolve problems quickly
➡ Give precise characterization of problem/symptom
➡ Give detailed description of
• Compile and optimization options
• Command to execute application if runtime error
➡ Include
• fully preprocessed (cpp) source to reproduce problem
• Other input files such as data files needed to run
➡ Reduce your test case to as small as possible
➡ Use triage tool and/or follow how-to-find-problems to narrow down possible culprit
Accepting Contributions
• Contribute to the source code
➣ No clear policy on how to accept changes yet
➣ Welcome suggestions
➣ Will look at how Open64 user group operates
➣ Will post policy on websites when decided
User Groups
User Groups
ORC user group
ipf-orc-support@lists.sourceforge.net
Pro64™ user group
open64-devel-support@lists.sourceforge.net
Open64 User Forum
Steering Committee:
Guang R. Gao
Jose Nelson Amaral
Date/Time: tonight, 8:00p.m. – 10:00 p.m.
Place: Marriott Hotel
Contributing Organizations and Individuals
This project is a joint development effort between
*Microprocessor Research Labs,*
*Intel Research Labs*
*and*
*Institute of Computing Technology,*
*Chinese Academy of Sciences*
Contributing individuals
ICT
- Xiqian Dong
- Ge Gan
- Ruiqi Lian
- Lixia Liu
- Yang Liu
- Fei Long
- Fang Lu
- Yunzhao Lu
- Shuxin Yang
- Chen Fu *
- Ren Li *
- Zhanglin Liu *
- Chengyong Wu
- Lizhe Wang *
- Shukang Zhou *
- Kexin Zhang
- Zhaoqing Zhang
* No longer at ICT
Contributing individuals
Intel
Sun Chan
Kaiyu Chen
BuiQi Cheng
Jesse Fang
Roy Ju
Tony Tuo
QingYu Zhao
DongYuen Chen
William Chen *
ZhaoHui Du
Tao Huang
Tin-Fook Ngai
YouFeng Wu
Dagen Wong *
* No longer at Intel
Thanks to Pro64 developers
Murthy Chandrasekhar Lilian Leung
Fred Chow Raymond Lo
Robert Cox ShinMing Liu
Peter Dahl Wilson Ho
Alban Douillet Zhao Peng
Ken Lesniak HongBo Rong
Jin Lin David Stephenson
Mike Murphy Peng Tu
Ross Towle
null], [49549, 49691, null], [49691, 49734, null], [49734, 49916, null], [49916, 50192, null], [50192, 50406, null], [50406, 50707, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 50707, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 50707, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 50707, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 50707, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 50707, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 50707, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 50707, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 50707, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 50707, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 50707, null]], "pdf_page_numbers": [[0, 233, 1], [233, 664, 2], [664, 680, 3], [680, 1047, 4], [1047, 1429, 5], [1429, 1882, 6], [1882, 2564, 7], [2564, 2806, 8], [2806, 3285, 9], [3285, 3678, 10], [3678, 3979, 11], [3979, 4378, 12], [4378, 4725, 13], [4725, 4860, 14], [4860, 4885, 15], [4885, 5296, 16], [5296, 5889, 17], [5889, 6236, 18], [6236, 6716, 19], [6716, 7464, 20], [7464, 7492, 21], [7492, 7591, 22], [7591, 8076, 23], [8076, 8103, 24], [8103, 8555, 25], [8555, 8879, 26], [8879, 9145, 27], [9145, 9509, 28], [9509, 9897, 29], [9897, 10192, 30], [10192, 10327, 31], [10327, 10364, 32], [10364, 10800, 33], [10800, 11333, 34], [11333, 11604, 35], [11604, 11990, 36], [11990, 12433, 37], [12433, 12987, 38], [12987, 13432, 39], [13432, 13792, 40], [13792, 13822, 41], [13822, 14064, 42], [14064, 14610, 43], [14610, 15086, 44], [15086, 15337, 45], [15337, 15628, 46], [15628, 16426, 47], [16426, 16705, 48], [16705, 16984, 49], [16984, 17263, 50], [17263, 17804, 51], 
[17804, 18083, 52], [18083, 18817, 53], [18817, 19096, 54], [19096, 19375, 55], [19375, 19893, 56], [19893, 20172, 57], [20172, 20451, 58], [20451, 20730, 59], [20730, 21009, 60], [21009, 21198, 61], [21198, 22295, 62], [22295, 23138, 63], [23138, 23505, 64], [23505, 23694, 65], [23694, 24389, 66], [24389, 24760, 67], [24760, 24914, 68], [24914, 25328, 69], [25328, 26130, 70], [26130, 26973, 71], [26973, 27407, 72], [27407, 28054, 73], [28054, 28300, 74], [28300, 28837, 75], [28837, 29337, 76], [29337, 29818, 77], [29818, 30317, 78], [30317, 30345, 79], [30345, 30726, 80], [30726, 31358, 81], [31358, 31800, 82], [31800, 32077, 83], [32077, 32440, 84], [32440, 32904, 85], [32904, 33086, 86], [33086, 33693, 87], [33693, 33966, 88], [33966, 35311, 89], [35311, 35446, 90], [35446, 35519, 91], [35519, 36044, 92], [36044, 36540, 93], [36540, 36581, 94], [36581, 37067, 95], [37067, 37663, 96], [37663, 37695, 97], [37695, 38180, 98], [38180, 38712, 99], [38712, 38847, 100], [38847, 38982, 101], [38982, 39163, 102], [39163, 39184, 103], [39184, 39623, 104], [39623, 40007, 105], [40007, 40353, 106], [40353, 40631, 107], [40631, 40936, 108], [40936, 40962, 109], [40962, 41382, 110], [41382, 41736, 111], [41736, 42055, 112], [42055, 42411, 113], [42411, 42780, 114], [42780, 42942, 115], [42942, 43283, 116], [43283, 43558, 117], [43558, 43723, 118], [43723, 43814, 119], [43814, 44263, 120], [44263, 44605, 121], [44605, 45038, 122], [45038, 45408, 123], [45408, 45793, 124], [45793, 46243, 125], [46243, 46627, 126], [46627, 46653, 127], [46653, 46921, 128], [46921, 46957, 129], [46957, 47280, 130], [47280, 47353, 131], [47353, 47436, 132], [47436, 47615, 133], [47615, 47776, 134], [47776, 48303, 135], [48303, 48483, 136], [48483, 48685, 137], [48685, 49191, 138], [49191, 49415, 139], [49415, 49427, 140], [49427, 49549, 141], [49549, 49691, 142], [49691, 49734, 143], [49734, 49916, 144], [49916, 50192, 145], [50192, 50406, 146], [50406, 50707, 147]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 50707, 0.01646]]}
|
olmocr_science_pdfs
|
2024-11-26
|
2024-11-26
|
6a88faa14378261523276cbe598fe67c22a743b1
|
[REMOVED]
|
{"Source-Url": "https://inria.hal.science/hal-03329311/file/Contract_Framework.pdf", "len_cl100k_base": 11251, "olmocr-version": "0.1.50", "pdf-total-pages": 19, "total-fallback-pages": 0, "total-input-tokens": 54014, "total-output-tokens": 14313, "length": "2e13", "weborganizer": {"__label__adult": 0.0005097389221191406, "__label__art_design": 0.0008306503295898438, "__label__crime_law": 0.0005488395690917969, "__label__education_jobs": 0.00211334228515625, "__label__entertainment": 0.00016391277313232422, "__label__fashion_beauty": 0.00025773048400878906, "__label__finance_business": 0.0005984306335449219, "__label__food_dining": 0.000629425048828125, "__label__games": 0.0012493133544921875, "__label__hardware": 0.0017290115356445312, "__label__health": 0.0012655258178710938, "__label__history": 0.000537872314453125, "__label__home_hobbies": 0.00026035308837890625, "__label__industrial": 0.0011434555053710938, "__label__literature": 0.0007305145263671875, "__label__politics": 0.000484466552734375, "__label__religion": 0.0007700920104980469, "__label__science_tech": 0.347412109375, "__label__social_life": 0.0001832246780395508, "__label__software": 0.00720977783203125, "__label__software_dev": 0.62890625, "__label__sports_fitness": 0.0004329681396484375, "__label__transportation": 0.00157928466796875, "__label__travel": 0.0002892017364501953}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 49707, 0.03376]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 49707, 0.56259]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 49707, 0.83679]], "google_gemma-3-12b-it_contains_pii": [[0, 1019, false], [1019, 3807, null], [3807, 6842, null], [6842, 9769, null], [9769, 12605, null], [12605, 15391, null], [15391, 17638, null], [17638, 20563, null], [20563, 23325, null], [23325, 26224, null], [26224, 29260, null], [29260, 32211, null], [32211, 
35066, null], [35066, 36824, null], [36824, 39098, null], [39098, 41504, null], [41504, 44430, null], [44430, 47888, null], [47888, 49707, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1019, true], [1019, 3807, null], [3807, 6842, null], [6842, 9769, null], [9769, 12605, null], [12605, 15391, null], [15391, 17638, null], [17638, 20563, null], [20563, 23325, null], [23325, 26224, null], [26224, 29260, null], [29260, 32211, null], [32211, 35066, null], [35066, 36824, null], [36824, 39098, null], [39098, 41504, null], [41504, 44430, null], [44430, 47888, null], [47888, 49707, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 49707, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 49707, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 49707, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 49707, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 49707, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 49707, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 49707, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 49707, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 49707, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 49707, null]], "pdf_page_numbers": [[0, 1019, 1], [1019, 3807, 2], [3807, 6842, 3], [6842, 9769, 4], [9769, 12605, 5], [12605, 15391, 6], [15391, 17638, 7], [17638, 20563, 8], [20563, 23325, 9], [23325, 26224, 10], [26224, 29260, 11], [29260, 32211, 12], [32211, 35066, 13], [35066, 36824, 14], [36824, 39098, 15], [39098, 41504, 16], [41504, 44430, 17], [44430, 47888, 18], [47888, 49707, 19]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 49707, 0.0]]}
|
olmocr_science_pdfs
|
2024-12-03
|
2024-12-03
|
45d73820f4cbbe5ede71da6da8de239899d6f943
|
Epistemic modelling and protocol dynamics
Wang, Y.
Chapter 4
Logics of Knowledge and Protocol Change
4.1 Introduction
In Chapter 3 we have shown that knowing that an epistemic protocol would fulfil a certain goal may affect the verification of the protocol. In this chapter, we turn our attention to knowledge of protocols and the dynamics of protocols. As we motivated in Chapter 1, knowing a protocol means knowing what to do [HF89] and knowing the meaning carried by actions according to the protocols [PR03]. In this chapter, we will make these two observations more precise. More importantly, we address the problem “how to know a protocol?” by modelling the dynamics of protocols.
**Knowing what to do** In the framework $L_{AP}$ of the previous chapter, a promising candidate for expressing that an agent knows what to do according to an announcement protocol $\pi$ is the formula $K_i(\langle \pi \rangle \top)$. However, due to the semantics of $L_{AP}$, $K_i(\langle \pi \rangle \top)$ only says that agent $i$ knows that $\pi$ can be executed at any world he considers possible, according to the inherent preconditions of the announcements in $\pi$. For example, in the Muddy Children scenario (Example 1.1.2) the assumed protocol is to (repeatedly) announce whether you yourself are muddy, and clearly you know you can announce it. However, there are many other true propositions that could be announced by an agent. For example, $K_j(\langle ![m_i] + ![\neg m_i] \rangle \top)$ is valid in the model for $j \neq i$, but $![m_i] + ![\neg m_i]$ is not part of the intended protocol. Clearly, we need a constraint telling us which announcements are in accordance with the protocol; in other words, we need to model the role of the father as in the original story of the Muddy Children.
The existing work on protocols in DEL enriches the epistemic models with explicit protocols such that the possible behaviours of agents are not only restricted by the inherent preconditions of epistemic events but also restricted by protocol information [HY09, vBGHP09, Hos09, HP10b]. This is similar to the treatment of protocols in ETL [HF89, PR03], where the temporal development of a system is generated from an initial situation by a commonly known protocol. In this chapter we take a different approach: we precisely model the role of the father as in the Muddy Children.
\(^1\)However, the framework in [vBGHP09] can also handle protocols which are not commonly known.
scenario by introducing protocol announcements \([!\pi]\) in the language. For example, we use \([!(a \cdot b)][b]\bot\) to express: after the announcement of the protocol \(a \cdot b\), \(b\) cannot be executed as the first event\(^2\). The semantics of the language with protocol announcements is defined on standard Kripke models. The extra protocol information is only introduced by protocol announcements while evaluating a formula. Such an approach makes it possible not only to model the “installation” of the initial protocol explicitly but also to handle protocol changes during the execution of the current protocol: we model a true father who may change his mind.
The dynamics of protocols often occur in social interactions. For example, imagine that you were told to close the door and on your way to do it you are told again not to close it. Also, as we mentioned in Chapter 1, someone from France may need to update his protocol on cheek kissing when living in Holland. As another example, let us consider yes-no questions, which can be viewed as protocols announced by the questioner: answer “Yes!” or answer “No!”. In dialogues, a well-trained spokesman may respond to a yes-no question by inserting yet another protocol: “before answering your question, tell me what you meant by \(\phi\).”
**Knowing what the actions mean** The dynamics of protocols that carry meanings for actions are even more interesting. Here is yet another example: the Chinese are non-confrontational in the sense that they will not overtly say “no”; instead they say “I will think about it” or “we will see”. For a western businessman, “we will see”, according to the standard interpretation, means it is still possible. However, if he is updated with the Chinese protocol ?\(p_{no} \cdot a_{will\text{-}see}\), then he should see this is just another way of saying “No”. Note that in standard DEL, the interpretations of events are fixed and implicitly assumed to be common knowledge, e.g., in PAL an announcement \(!\phi\) is assumed to have an inherent meaning: \(\phi\) is true. This is because the semantic objects (event models) are explicitly included in the syntax in the general DEL framework. However, the same utterance \(!\phi\) (syntax) may carry different meanings (semantics), as we have seen in the we-will-see example. A closer look at public announcements should separate the utterances and their meanings. In fact, an utterance \(a\) only carries the meaning \(\phi\) if the hearer knows that the protocol ?\(p_{no} \cdot a_{will\text{-}see}\) is carried out (cf. [PR03] for a detailed rationale).
To handle protocols that carry meanings for actions, it is inevitable to introduce tests in the protocol programming language. Intuitively, the tests are not observable by the agents unless announced previously, e.g., \([?p_{no} \cdot a_{will\text{-}see}]K_i\, p_{no}\) should not be valid while \([!(?p_{no} \cdot a_{will\text{-}see})][?p_{no} \cdot a_{will\text{-}see}]K_i\, p_{no}\) should be valid. We define the formal semantics for this enriched language in this chapter.
\(^2\)Here we assume that if the protocol is announced then it is followed by all the agents. See [PS10] for an interesting discussion on “knowingly following the protocol” by agents in a setting of imperfect information.
4.2 Basic Logic $\mathcal{PDL}^!$
The formulas of $\mathcal{PDL}^!$ are built from the set of basic proposition letters $P$ and the set of atomic actions $\Sigma$ as follows:
$$\begin{align*}
\phi & ::= \top | p | \neg \phi | \phi \land \phi | [\pi] \phi | [!\pi] \phi \\
\pi & ::= 1 | 0 | a | \pi \cdot \pi | \pi + \pi | \pi^*
\end{align*}$$
where $p \in P$ and $a \in \Sigma$. The intended meaning of the formulas is mostly as in $\mathcal{PDL}$, but “in context” of the protocol constraints: $[\pi] \phi$ now says that “after any run of the
program $\pi$ which is allowed by the current protocol, $\phi$ holds". The new formula $[!\pi]\phi$ expresses "after the announcement of the new protocol $\pi$, $\phi$ holds."
To give the semantics of PDL$^!$, we first recall a useful notion for regular expressions. The input derivative $\pi\backslash a$ of the regular expression $\pi \in \text{Reg}_\Sigma$ w.r.t. $a \in \Sigma$ is defined by $L(\pi\backslash a) = \{v \mid av \in L(\pi)\}$. With the output function $o : \text{Reg}_\Sigma \rightarrow \{0, 1\}$ we can axiomatize $\backslash$ (cf. [Brz64, Con71]):
$$\pi = o(\pi) + \sum_{a \in \Sigma} (a \cdot (\pi \backslash a))$$
1. $a\backslash a = 1$ and $1\backslash a = 0\backslash a = b\backslash a = 0$ \hspace{1cm} ( $a \neq b$ )
2. $(\pi \cdot \pi')\backslash a = (\pi\backslash a) \cdot \pi' + o(\pi) \cdot (\pi'\backslash a)$
3. $(\pi + \pi')\backslash a = \pi\backslash a + \pi'\backslash a$ and $\pi^*\backslash a = (\pi\backslash a) \cdot \pi^*$
4. $o(1) = o(\pi^*) = 1$
5. $o(0) = o(a) = 0$, $o(\pi \cdot \pi') = o(\pi) \cdot o(\pi')$, $o(\pi + \pi') = o(\pi) + o(\pi')$
Given $w = a_0a_1 \cdots a_n \in \Sigma^*$, let $\pi\backslash w = ((\pi\backslash a_0)\backslash a_1 \cdots)\backslash a_n$. It is clear that $L(\pi\backslash w) = \{v \mid wv \in L(\pi)\}$.\(^3\) Together with the axioms of Kleene algebra [Koz91] we can syntactically derive $\pi\backslash w$, which is intuitively the remaining protocol of $\pi$ after executing a run $w$. For example:
$$(a + b \cdot c)^\ast \backslash b = ((a\backslash b) + (b \cdot c)\backslash b) \cdot (a + b \cdot c)^\ast = (0 + 1 \cdot c) \cdot (a + b \cdot c)^\ast = c \cdot (a + b \cdot c)^\ast$$
Note that in general we do not have $w \cdot (\pi\setminus w) = \pi$. We say $w$ is compliant with $\pi$ (notation: $w \approx \pi$ ) if $\pi\setminus w \neq 0$, namely, executing $w$ is allowed by the protocol $\pi$.
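The derivative calculus above is mechanical enough to execute directly. Below is a minimal Python sketch (the datatype and function names are our own, not from the text) implementing the output function $o$, the input derivative $\pi\backslash a$, its extension to runs $\pi\backslash w$, and the compliance check $w \approx \pi$ via an emptiness test:

```python
from dataclasses import dataclass

# Regular expressions as an algebraic datatype: 0, 1, a, e·f, e+f, e*
@dataclass(frozen=True)
class Re: pass
@dataclass(frozen=True)
class Zero(Re): pass
@dataclass(frozen=True)
class One(Re): pass
@dataclass(frozen=True)
class Sym(Re):
    a: str
@dataclass(frozen=True)
class Cat(Re):
    l: Re; r: Re
@dataclass(frozen=True)
class Alt(Re):
    l: Re; r: Re
@dataclass(frozen=True)
class Star(Re):
    e: Re

def o(e: Re) -> Re:
    """Output function: One() iff the empty word is in L(e), else Zero()."""
    if isinstance(e, (One, Star)): return One()
    if isinstance(e, (Zero, Sym)): return Zero()
    if isinstance(e, Cat):
        return One() if isinstance(o(e.l), One) and isinstance(o(e.r), One) else Zero()
    return One() if isinstance(o(e.l), One) or isinstance(o(e.r), One) else Zero()

def d(e: Re, a: str) -> Re:
    """Brzozowski input derivative e\\a, following the axioms above."""
    if isinstance(e, (Zero, One)): return Zero()
    if isinstance(e, Sym): return One() if e.a == a else Zero()
    if isinstance(e, Cat):
        left = Cat(d(e.l, a), e.r)
        return Alt(left, d(e.r, a)) if isinstance(o(e.l), One) else left
    if isinstance(e, Alt): return Alt(d(e.l, a), d(e.r, a))
    return Cat(d(e.e, a), e)          # Star case: pi*\a = (pi\a)·pi*

def dw(e: Re, w: str) -> Re:
    """e\\w for a run w = a0 a1 ... an, by iterated letter derivatives."""
    for a in w: e = d(e, a)
    return e

def nonempty(e: Re) -> bool:
    """L(e) != {}; w is compliant with e (w ~ e) iff nonempty(dw(e, w))."""
    if isinstance(e, Zero): return False
    if isinstance(e, (One, Sym, Star)): return True
    if isinstance(e, Cat): return nonempty(e.l) and nonempty(e.r)
    return nonempty(e.l) or nonempty(e.r)

# Protocol a·c + b·d: after the run a, only c remains allowed.
proto = Alt(Cat(Sym("a"), Sym("c")), Cat(Sym("b"), Sym("d")))
```

Here `nonempty(dw(proto, w))` decides $w \approx \pi$: the runs `"a"`, `"ac"`, and `"b"` are compliant with $a \cdot c + b \cdot d$, while `"ad"` and `"c"` are not.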
Intuitively, to evaluate $[\pi]w\phi$ we need to memorize the current protocol in some way. Here we employ a trick similar to the ones used in the semantics developed in [Gal02, Wan06, BE09]. We define the satisfaction relation w.r.t. a mode $\pi$ (notation: $\models_\pi$), which is used to record the current protocol. Given the current protocol $\pi$, the allowed runs in a program $\pi'$ w.r.t $\pi$ are those $w \in \Sigma^*$ such that $w \in L(\pi')$ and $w \approx \pi$. Note that if the current protocol is $\pi$, then after executing a run $w$ we have to update $\pi$ by the remaining protocol $\pi\setminus w$. Now we are ready to give the semantics as follows:
$$\begin{align*}
M, s \models \phi & \iff M, s \models_{\Sigma'^*} \phi \\
M, s \models_\pi p & \iff p \in V(s) \\
M, s \models_\pi \neg\phi & \iff M, s \not\models_\pi \phi \\
M, s \models_\pi \phi \land \psi & \iff M, s \models_\pi \phi \text{ and } M, s \models_\pi \psi \\
M, s \models_\pi [\pi']\phi & \iff \forall (w, s') : w \in L(\pi'),\ w \approx \pi, \text{ and } s \xrightarrow{w} s' \implies M, s' \models_{\pi \backslash w} \phi \\
M, s \models_\pi [!\pi']\phi & \iff M, s \models_{\pi'} \phi
\end{align*}$$
where $\Sigma'$ stands for the program $a_0 + a_1 + \cdots + a_n$ if $\Sigma = \{a_0, a_1, \ldots, a_n\}$. The first clause says that initially everything is allowed and the last says that the newly announced protocol overrides the current one. $[\pi']\phi$ is true w.r.t. the current protocol $\pi$ iff on each $s'$ that is reachable from $s$ by some run $w$ of $\pi'$ which is allowed by the current protocol $\pi$: $\phi$ holds w.r.t. the remaining protocol $\pi\backslash w$. Note that it is important to remember the run $w$, which records how you get to $s'$, as the following example shows:
\(^3\)$\pi\backslash w$ is also a regular expression, cf. [Con71].
4.2.1. Example. Consider the following model $M$:
\[ s \xrightarrow{\,a\,} t \qquad t \xrightarrow{\,c\,} u \qquad t \xrightarrow{\,d\,} v \]
It can be verified that:
\[ M, s \models [!(a \cdot c + b \cdot d)]\langle a + b \rangle(\langle c \rangle \top \land [!(c + d)]\langle d \rangle \top) \]
The intuition behind this example is as follows. After announcing the protocol $a \cdot c + b \cdot d$, the program $a + b$ can be executed, but actually only $a$ can be executed on the model. Thus after executing $a + b$ only $c$ is possible according to the remaining protocol $(a \cdot c + b \cdot d) \backslash a = c$. However, if we now announce a new protocol $c + d$ then $d$ becomes available again.
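This intuition can be made executable for star-free protocols, where a protocol is simply its finite set of allowed runs. The following Python sketch (state names `s`, `t`, `u`, `v` and all function names are ours, not from the text) checks the example step by step:

```python
# Example 4.2.1, hypothetical state names: s --a--> t, t --c--> u, t --d--> v.
R = {("s", "a", "t"), ("t", "c", "u"), ("t", "d", "v")}
protocol = {"ac", "bd"}          # L(a·c + b·d), star-free, hence a finite set

def runs(state, word):
    """All states reachable from `state` by executing `word` in the model."""
    states = {state}
    for a in word:
        states = {t for x in states for (f, b, t) in R if f == x and b == a}
    return states

def compliant(w, proto):
    """w ~ pi: for a finite language, w must be a prefix of an allowed run."""
    return any(v.startswith(w) for v in proto)

def remaining(proto, w):
    """pi\\w: what may still be executed after the run w."""
    return {v[len(w):] for v in proto if v.startswith(w)}

def can_execute(state, word, proto):
    """<word>T in mode proto: word is both allowed and executable here."""
    return compliant(word, proto) and bool(runs(state, word))
```

After announcing $a \cdot c + b \cdot d$, `can_execute("s", "a", protocol)` holds while `can_execute("s", "b", protocol)` fails (the model has no $b$-transition); `remaining(protocol, "a")` is `{"c"}`, so $d$ is forbidden at `t` until the new protocol $c + d$ is announced, which overrides the old one.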
Recall the PDL semantics in Section 2.3.1. It is not hard to see:
4.2.2. Proposition. For any test-free PDL formula $\phi$ and any pointed Kripke model $(M, s)$:
\[ M, s \models_{\text{PDL}} \phi \iff M, s \models \phi \]
A natural question to ask is whether PDL$^!$ is more expressive than test-free PDL. To answer this question, we now take a closer look at the runs $w$ in the semantics of $[\pi']\phi$. Given $\pi$, let $C_{\mathcal{L}(\pi)}$ be the set of all the pre-sequences of $\pi$: $\{w \mid w \approx \pi\}$. We first show that $C_{\mathcal{L}(\pi)}$ can be partitioned into finitely many regular languages.
4.2.3. Lemma. For any regular expression $\pi$ there is a minimal natural number $k$ such that $C_{\mathcal{L}(\pi)}$ can be partitioned into $L(\pi_0), \ldots, L(\pi_k)$ such that for any $i \leq k$ and any $w, v \in \mathcal{L}(\pi_i)$: $\pi \backslash w = \pi \backslash v$.
Proof. By Kleene’s theorem 2.1.3 we can construct a deterministic finite automaton recognizing the language of $\pi$. It is well known that DFA can be minimized, thus we obtain a minimal automaton that recognizes $\mathcal{L}(\pi)$:
\[ A_\pi = (\{q_0, \ldots, q_k\}, \Sigma, q_0, \rightarrow, F) \]
where $\{q_0, \ldots, q_k\}$ is a set of states with $q_0$ being the start state and a subset $F$ being the set of accept states. For each $i \leq k$ such that $q_i$ can reach a state in $F$: we let $\pi_i$ be the regular expression corresponding to the automaton $(\{q_0, \ldots, q_k\}, \Sigma, q_0, \rightarrow, \{q_i\})$. Since $A_\pi$ is deterministic, it is not hard to see that these $\pi_i$ form the partition that we want. $\Box$
In the sequel, we call the above unique partition $\pi_0, \ldots, \pi_k$ the pre-derivatives of $\pi$. For example, the minimal deterministic automaton$^4$ of $a^* \cdot d + b \cdot (c + d)$ is:
---
$^4$We omit the transitions to the “trash” state which can not reach any accept state.
thus the pre-derivatives of \(a^* \cdot d + b \cdot (c + d)\) are \(1\), \(a \cdot a^*\), \(b\), and \(a^* \cdot d + b \cdot (c + d)\).\(^5\)
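The pre-derivative construction can be checked mechanically: run the minimal DFA and group prefixes by the state they reach. A small Python sketch (the state names `q0`–`q3` are our own labelling of the automaton, with the "trash" state omitted as in footnote 4):

```python
from itertools import product

# Minimal DFA for a*·d + b·(c+d); transitions to the trash state omitted.
delta = {("q0", "a"): "q1", ("q1", "a"): "q1", ("q0", "b"): "q2",
         ("q0", "d"): "q3", ("q1", "d"): "q3",
         ("q2", "c"): "q3", ("q2", "d"): "q3"}

def run_dfa(w):
    """State reached on input w, or None if w is not a pre-sequence."""
    q = "q0"
    for a in w:
        if (q, a) not in delta:
            return None
        q = delta[(q, a)]
    return q

# Group short prefixes by the DFA state they reach: each group samples one
# pre-derivative class (here: 1, a·a*, b, and the full language itself).
groups = {}
for n in range(4):
    for w in map("".join, product("abcd", repeat=n)):
        q = run_dfa(w)
        if q is not None:
            groups.setdefault(q, []).append(w)
```

Up to length 3, the class of `q0` is just the empty prefix ($1$), that of `q1` is `a`, `aa`, `aaa` (sampling $a \cdot a^*$), that of `q2` is `b`, and `q3` collects the complete words of the language.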
Now we define the following translation from PDL$^!$ to PDL:

\[
\begin{align*}
t(\phi) & = t_{\Sigma'^*}(\phi) \\
t_\pi(p) & = p \\
t_\pi(\neg \phi) & = \neg t_\pi(\phi) \\
t_\pi(\phi_1 \land \phi_2) & = t_\pi(\phi_1) \land t_\pi(\phi_2) \\
t_\pi([\pi']\phi) & = \bigwedge_{i=0}^{k} [\theta_i]\, t_{\pi \backslash \pi_i}(\phi) \\
t_\pi([!\pi']\phi) & = \langle \pi' \rangle \top \rightarrow t_{\pi'}(\phi)
\end{align*}
\]

where \(\pi_0, \ldots, \pi_k\) are the pre-derivatives of \(\pi\), \(\theta_i\) is a regular expression corresponding to \(L(\pi') \cap L(\pi_i)\), and \(\pi \backslash \pi_i\) is defined as \(\pi \backslash w\) for any \(w \in L(\pi_i)\).
By this translation we can show that PDL and PDL$^!$ are equally expressive.
4.2.4. Theorem. For any pointed Kripke model \(M, s:\)
\[
M, s \models \phi \iff M, s \models_{\text{PDL}} t(\phi).
\]
Proof. By induction on \(\phi\) we can show: \(M, s \models_{\pi} \phi \iff M, s \models_{\text{PDL}} t_\pi(\phi)\). The only non-trivial case is for \([\pi']\phi\):
\[
\begin{align*}
& M, s \models_{\pi} [\pi']\phi \\
\iff & \forall (w, s') : w \in L(\pi'),\ w \approx \pi, \text{ and } s \xrightarrow{w} s' \implies M, s' \models_{\pi \backslash w} \phi \\
\iff & \forall (w, s') : \text{if there is a pre-derivative } \pi_i \text{ such that } w \in L(\pi'),\ w \in L(\pi_i), \text{ and } s \xrightarrow{w} s', \text{ then } M, s' \models_{\pi \backslash \pi_i} \phi \\
\iff & \text{for all pre-derivatives } \pi_i : \forall (w, s') : \text{if } s \xrightarrow{w} s' \text{ and } w \in L(\pi') \cap L(\pi_i), \text{ then } M, s' \models_{\pi \backslash \pi_i} \phi \\
\iff & M, s \models_{\text{PDL}} \bigwedge_{i=0}^{k} [\theta_i]\, t_{\pi \backslash \pi_i}(\phi)
\end{align*}
\]

The second step uses the fact that $w \approx \pi$ iff $w \in L(\pi_i)$ for some pre-derivative $\pi_i$, in which case $\pi \backslash w = \pi \backslash \pi_i$; the last step uses the induction hypothesis on $\phi$. $\Box$
Discussion. In this section, we take a rather liberal view on the “default” protocol, namely we assume that everything is allowed initially, and the announcements may only restrict the possible actions. On the other hand, we could equally well start with a conservative initialization where nothing is allowed unless announced later. It is not hard
\(^5\)Note that \(a \cdot a^* \cdot d + b \cdot (c + d) + d = a^* \cdot d + b \cdot (c + d)\).
to see that we can also translate this conservative version of PDL$^!$ to PDL if we let $t(\phi) = t_1(\phi)$, where $1$ is the constant for the empty sequence, i.e., the skip protocol. For example, $t_1(\langle a \rangle \top) = \langle 0 \rangle \top$, which is equivalent to $\bot$, while $t_1([!a]\langle a + b \rangle \top) = \langle a \rangle \top \rightarrow \langle a \rangle \top$: before any announcement nothing can be executed, and after announcing $a$ only $a$ is allowed in $a + b$.
Moreover, $[!\pi]$ is rather radical in the sense that it changes the protocol completely. We may define a more general operation as follows. Let $\pi'(x)$ be a regular expression over $\Sigma \cup \{x\}$, namely a regular expression with a variable $x$. Now we define:
$$M, s \models_\pi [!\pi'(x)]\phi \iff (M, s \models_\pi \langle \pi'(\pi) \rangle \top \implies M, s \models_{\pi'(\pi)} \phi)$$

where $\pi'(\pi)$ is the result of substituting the current protocol $\pi$ for $x$ in $\pi'(x)$.
We can then concatenate, add, insert and repeat protocols by announcing $x \cdot \pi'$, $x + \pi'$, $\pi' \cdot x$, and $x^*$ respectively. It is easy to see that the announcement operator $[!\pi]$ introduced previously is a special case of $[!\pi'(x)]$. We can still translate the logic with the generalized protocol announcements to PDL with an easy revision of the translation:
$$t_\pi([!\pi'(x)]\phi) = \langle \pi'(\pi) \rangle \top \rightarrow t_{\pi'(\pi)}(\phi)$$
4.3 Public Event Logic PDL$^{!?}$
In this section, we extend the language of PDL$^!$ with knowledge operators and Boolean tests in programs. We shall see that by announcing a protocol with tests, we can let actions carry propositional information, as we motivated in Chapter 1. The language of PDL$^{!?}$ is defined as follows:
$$\phi ::= \top \mid p \mid \neg \phi \mid \phi \land \phi \mid [\pi]\phi \mid [!\pi]\phi \mid K_i\phi$$
$$\pi ::= \;?\phi_b \mid a \mid \pi \cdot \pi \mid \pi + \pi \mid \pi^*$$
where $i \in I$ and $\phi_b$ ranges over Boolean formulas based on the basic propositions in $P$. Note that we do not include $1$ and $0$ as atomic actions since they can be expressed by the Boolean tests $?\top$ and $?\bot$. We call the programs $\pi$ in PDL$^{!?}$ guarded regular expressions.
Now we can express the Häagen-Dazs slogan mentioned in Chapter 1 by the protocol $\pi_{H\text{-}D} = \;?p_{love} \cdot a_{buy}$. A suitable semantics should make $[!\pi_{H\text{-}D}][a_{buy}]K_i\, p_{love}$ valid. However, without the announcement $[!\pi_{H\text{-}D}]$, the “secret” love may not be known: $[?p_{love} \cdot a_{buy}]K_i\, p_{love}$ should not be valid. As we mentioned in the introduction, we assume all the $a \in \Sigma$ are public events which can be observed by all the agents, while the tests, unless announced, are not observable to the agents.
To prepare ourselves for the definition of the semantics, we first interpret regular expressions with Boolean tests as languages of guarded strings [Koz01]. A guarded string over $P$ and $\Sigma$ is a sequence $\rho_0\, \alpha_1\, \rho_1\, \alpha_2\, \rho_2 \cdots \alpha_n\, \rho_n$ where $\alpha_i \in \Sigma$ and each $\rho_i \subseteq P$ represents a valuation of the basic propositions in $P$ ($p \in \rho$ iff $p$ is true according to $\rho$). For any Boolean formula $\psi$, let $X_\psi \subseteq 2^P$ be the corresponding set of valuations, represented by subsets of $P$, that make $\psi$ true. For any $\rho \subseteq P$, let $\phi_\rho$ be the formula $\bigwedge_{p \in \rho} p \land \bigwedge_{p \in (P - \rho)} \neg p$.
Now we can define the language of guarded strings associated with a guarded regular expression over $\Sigma$ and $P$:
\[
\begin{align*}
L_g(a) & = \{ \rho\, a\, \rho' \mid \rho, \rho' \subseteq P \} \\
L_g(?\psi) & = \{ \rho \mid \rho \in X_\psi \} \\
L_g(\pi \cdot \pi') & = \{ w \circ v \mid w \in L_g(\pi),\ v \in L_g(\pi'),\ w \circ v \text{ is defined} \} \\
L_g(\pi + \pi') & = L_g(\pi) \cup L_g(\pi') \\
L_g(\pi^*) & = \{ \rho \mid \rho \subseteq P \} \cup \bigcup_{n>0} L_g(\pi^n)
\end{align*}
\]

where $\circ$ is the fusion product: $w \circ v = w' \rho v'$ when $w = w'\rho$ and $v = \rho v'$, and undefined otherwise; $\pi^n = \pi \cdot \pi \cdots \pi$ ($n$ times).

We write $\pi_1 \equiv_g \pi_2$ if $L_g(\pi_1) = L_g(\pi_2)$.

4.3.1. Example. We have:

\[
?p \cdot ?q \cdot a \;\equiv_g\; ?(p \land q) \cdot a \;\equiv_g\; ?(p \land q) \cdot ?q \cdot a
\]

\[
?(p \land q) \cdot a + ?(p \land \neg q) \cdot a \;\equiv_g\; ?p \cdot a \quad \text{and} \quad ?p \cdot a \;\not\equiv_g\; a \cdot ?p
\]
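The guarded-string semantics is easy to prototype. The following Python sketch (representation and all names are ours) encodes valuations as frozensets, implements the fusion product, and checks the equivalences of Example 4.3.1 by brute force over $P = \{p, q\}$:

```python
from itertools import combinations

P = ("p", "q")
VALS = [frozenset(s) for n in range(len(P) + 1) for s in combinations(P, n)]

# A guarded string is a tuple alternating valuations (frozensets) and actions.
def L_act(a):
    """L_g(a) = { rho a rho' | rho, rho' subsets of P }"""
    return {(r, a, r2) for r in VALS for r2 in VALS}

def L_test(pred):
    """L_g(?psi) = { rho | rho satisfies psi }"""
    return {(r,) for r in VALS if pred(r)}

def fuse(w, v):
    """Fusion product: defined only when the adjacent valuations match."""
    return w + v[1:] if w[-1] == v[0] else None

def L_cat(L1, L2):
    """L_g of a concatenation: all defined fusions."""
    return {fuse(w, v) for w in L1 for v in L2 if fuse(w, v) is not None}

# ?(p∧q)·a + ?(p∧¬q)·a  has the same guarded strings as  ?p·a:
lhs = L_cat(L_test(lambda r: "p" in r and "q" in r), L_act("a")) | \
      L_cat(L_test(lambda r: "p" in r and "q" not in r), L_act("a"))
rhs = L_cat(L_test(lambda r: "p" in r), L_act("a"))
```

The last inequivalence of the example also comes out: `?p·a` constrains the valuation before the action, while `a·?p` constrains the one after, so their guarded-string languages differ.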
We now define the input derivative $\pi \backslash w$ for a guarded string $w$ by:

\[
L_g(\pi \backslash w) = \{ v \mid w \circ v \in L_g(\pi) \}
\]

and we say $w \approx \pi$ if $L_g(\pi \backslash w) \neq \emptyset$. As in the previous section, we let $C_{L_g(\pi)} = \{ w \mid w \approx \pi \}$.
Let $L(w)$ be the sequence of public events $a_0 \ldots a_k$ that occurs in $w$, e.g., $L({?p} \cdot a \cdot b) = L(a \cdot {?p} \cdot b) = a \cdot b$. Recall that we assume that only the public events can be observed. Thus a guarded string $w$ is indistinguishable from another guarded string $v$ if $L(w) = L(v)$.
According to the standard semantics of PAL, the effect of announcing a formula $\phi$ is to restrict the model to the $\phi$-worlds (see Section 2.3.3). Our public events are like announcements but with preconditions given by the previously announced protocols. However, to model the public events we can also keep the model intact but remember the information induced by the public events. When evaluating epistemic formulas, we let agents only consider possible those worlds which are consistent with the previously recorded information. Since the tests are Boolean, this restriction on accessible worlds works the same as the restriction on models in standard PAL. This motivates us to use $\phi^w_\pi$ in the semantics of PDL$^{!?}$, where $\phi^w_\pi$ records the information given by the public events in $w$ according to the protocol $\pi$. We interpret PDL$^{!?}$ on S5 models $M = (S, \mathbf{P}, \mathbf{I}, \sim, V)$ as follows:
\[
\begin{align*}
M, s \vDash \phi &\iff M, s \vDash^{\epsilon}_{\Sigma^*} \phi \\
M, s \vDash^{w}_{\pi} p &\iff p \in V(s) \\
M, s \vDash^{w}_{\pi} \neg\phi &\iff M, s \nvDash^{w}_{\pi} \phi \\
M, s \vDash^{w}_{\pi} \phi \land \phi' &\iff M, s \vDash^{w}_{\pi} \phi \text{ and } M, s \vDash^{w}_{\pi} \phi' \\
M, s \vDash^{w}_{\pi} K_i \phi &\iff \text{for all } t: \text{ if } s \sim_i t \text{ and } M, t \vDash \phi^{w}_{\pi} \text{ then } M, t \vDash^{w}_{\pi} \phi \\
M, s \vDash^{w}_{\pi} [\pi']\phi &\iff \text{for all } v \in L_g(\pi'): w \circ v \propto \pi \text{ and } s[\![v]\!]s \text{ imply } M, s \vDash^{w \circ v}_{\pi} \phi \\
M, s \vDash^{w}_{\pi} [!\pi']\phi &\iff M, s \vDash^{\epsilon}_{\pi'} \phi
\end{align*}
\]
where $\epsilon$, the empty history, acts as the unit of $\circ$, and:
\[
s[\![w]\!]s' \iff s' = s,\ w = \rho\, a_1\, \rho\, a_2 \cdots \rho\, a_n\, \rho \text{ and } V(s) = \rho
\]
and
\[
\phi^{w}_{\pi} = \bigvee \{ \phi_{\rho'} \mid v = \rho'\, a_1\, \rho'\, a_2 \cdots \rho'\, a_n\, \rho',\ L(w) = L(v),\ v \propto \pi \}
\]
Note that we do not include the transitions labelled by $a \in \Sigma$ in the models, since we assume that each public event is executable at each state unless it is not compliant with the current protocol (e.g., you can talk about anything in public unless constrained by some law or conventions). Since the public events are intended to be announcement-like events, we also assume that executing such an event does not change the real state from one to another. This explains the uniformity of $\rho$ and the fixed state $s$ in the definition of $[\![w]\!]$. Now we explain the idea behind $\phi^w_\pi$ as follows. First, given a $w = \rho\, a_1\, \rho \cdots \rho\, a_n\, \rho$, we collect all the uniform sequences $v = \rho'\, a_1\, \rho' \cdots \rho'\, a_n\, \rho'$ such that $v \propto \pi$. Intuitively, $\phi_{\rho'}$ represents the information carried by such a $v$ according to the protocol $\pi$. Since each such $v$ is indistinguishable from $w$ for all the agents, the disjunction $\phi^w_\pi$ is then the information which can be derived from the observation of the public events in $w$ according to the protocol $\pi$.
Consider the Häagen-Dazs example: let $M$ be a two-world model representing that a girl does not know whether a boy loves her or not (she cannot distinguish a love-world $s$ from a $\neg$love-world $t$). Let $\pi = {?p_{love}} \cdot a_{buy}$ and $w_0 = \{p_{love}\}\, a_{buy}\, \{p_{love}\}$. It is clear that $w_0$ is the only guarded string in $L_g(\pi)$ that has a uniform $\rho$. Note that $L(w_0) = L(\emptyset\, a_{buy}\, \emptyset)$, thus $\phi^{w_0}_{\Sigma^*} = p_{love} \lor \neg p_{love}$. We now show $M, s \nvDash [\pi]K p_{love}$:
\[
\begin{align*}
M, s \vDash [\pi]K p_{love} &\iff M, s \vDash^{\epsilon}_{\Sigma^*} [\pi]K p_{love} \\
&\iff \text{for all } w \in L_g(\pi) \text{ with } w \propto \Sigma^* \text{ and } s[\![w]\!]s:\ M, s \vDash^{w}_{\Sigma^*} K p_{love} \\
&\iff M, s \vDash^{w_0}_{\Sigma^*} K p_{love}
\end{align*}
\]
Since $s \sim t$ and $M, t \vDash \phi^{w_0}_{\Sigma^*} = p_{love} \lor \neg p_{love}$ but $M, t \nvDash p_{love}$, we have $M, s \nvDash [\pi]K p_{love}$. On the other hand, after the announcement $!\pi$ the information carried by observing $a_{buy}$ is $\phi^{w_0}_{\pi} = p_{love}$, which rules out the $\neg$love-world $t$. Therefore $M, s \vDash [!\pi][a_{buy}]K p_{love}$.
Similarly, for the we-will-see scenario mentioned in the introduction, if $M$ is a two-world model representing that a Westerner does not know whether $p_{no}$ (state $s$) or $\neg p_{no}$ (state $t$), then we can show that:
\[
M, s \vDash [!({?\top} \cdot a_{will\text{-}see})][a_{will\text{-}see}]\neg K p_{no} \land [!({?p_{no}} \cdot a_{will\text{-}see})][a_{will\text{-}see}]K p_{no}
\]
where ${?\top} \cdot a_{will\text{-}see}$ is the default protocol a Westerner may have as the standard interpretation of the sentence "we will see", which does not carry any useful information.
In the rest of this section we will show that PDL$^{!?}$ can be translated back to PDL as well. We will follow a similar strategy as in the previous section to finitely partition $C_\pi$. This time we need to use automata on guarded strings. Given $P$, let $B(P)$ be the set $2^{2^P}$. Intuitively, each $X \in B(P)$ represents a Boolean formula over $P$ by the set of valuations that make the formula true.
4.3.2. Definition. (Automata on guarded strings [Koz01]) A finite automaton on guarded strings (or simply guarded automaton) over a finite set of actions \( \Sigma \) and a finite set of atomic tests \( P \) is a tuple \( A = (Q, \Sigma, P, q_0, \rightarrow, F) \) where the transitions are labelled by atomic actions in \( \Sigma \) (action transitions) and sets \( X \in B(P) \) (test transitions). \( A \) accepts a finite string \( w \) over \( \Sigma \cup B(P) \) (notation: \( w \in L_{\Sigma \cup B(P)}(A) \)), if it accepts \( w \) as a standard finite automaton over label set \( \Sigma \cup B(P) \). The acceptance for guarded strings is defined based on the acceptance of normal strings and the following transformation function \( G \) which takes a string over \( \Sigma \cup B(P) \) and outputs a set of guarded strings.
\[
G(a) = \{ \rho a \rho' | \rho, \rho' \subseteq P \}
\]
\[
G(X) = \{ \rho | \rho \in X \}
\]
\[
G(wv) = \{ u\, \rho\, u' \mid u\rho \in G(w) \text{ and } \rho u' \in G(v) \}
\]
We say \( A \) accepts a finite guarded string \( v = \rho_0 a_0 \rho_1 \ldots a_{k-1} \rho_k \) over \( \Sigma \) and \( P \) if \( v \in G(w) \) for some string \( w \in L_{\Sigma \cup B(P)}(A) \). Let \( L_g(A) \) be the language of guarded strings accepted by \( A \).
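Under the representation of a test label $X \in B(P)$ as a set of valuations, the transformation $G$ can be sketched as follows (illustrative Python, all names ours; the single-symbol cases are the two clauses above, and concatenation is the fusion product):

```python
from itertools import combinations

def valuations(P):
    """All subsets of P as frozensets: the possible valuations ρ."""
    return [frozenset(c) for r in range(len(P) + 1) for c in combinations(sorted(P), r)]

def G(word, P, Sigma):
    """Map a non-empty string over Σ ∪ B(P) to its set of guarded strings."""
    vals = valuations(P)
    def G1(x):  # G on a single symbol
        if x in Sigma:
            return {(r1, x, r2) for r1 in vals for r2 in vals}   # G(a) = {ρ a ρ'}
        return {(r,) for r in x}                                  # G(X) = {ρ | ρ ∈ X}
    langs = G1(word[0])
    for x in word[1:]:   # G(wv): fuse G(w) with G(v) on the shared valuation
        langs = {w + v[1:] for w in langs for v in G1(x) if w[-1] == v[0]}
    return langs

P, Sigma = {'p'}, {'a'}
rho_p, rho_0 = frozenset({'p'}), frozenset()
X_p = frozenset({rho_p})          # the test label for the formula p
assert G((X_p, 'a'), P, Sigma) == {(rho_p, 'a', rho_p), (rho_p, 'a', rho_0)}
```

The assertion shows that the word $X_p\, a$ denotes exactly the guarded strings that start in a $p$-valuation and then perform $a$.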
We say a guarded automaton is deterministic if the following hold (cf. [Koz01]):
- Each state is either a state that only has outgoing action transitions (action state) or a state that only has outgoing test transitions (test state).
- The outgoing action transitions are deterministic: for each action state \( q \) and each \( a \in \Sigma \), \( q \) has one and only one \( a \)-successor.
- The outgoing test transitions are deterministic: they are labelled by singletons \( \{\rho\} \) with \( \rho \subseteq P \), and for each test state \( q \) and each \( \rho \), \( q \) has one and only one \( \{\rho\} \)-successor. Clearly the tests \( \rho \) at a test state are logically pairwise exclusive and altogether exhaustive (viewing \( \rho \) as the Boolean formula \( \phi_\rho \)).
- The start state is a test state and all accept states are action states.
- Each cycle contains at least one action transition.
The Kleene theorem between guarded automata and guarded regular expressions is proved in [Koz01].
#### 4.3.3. Theorem. [Koz01, Theorems 3.1, 3.4] For each guarded regular expression \( \pi \) over \( \mathbf{P} \) and \( \Sigma \) there is a (deterministic) guarded automaton \( A \) over \( \mathbf{P} \) and \( \Sigma \) such that \( L_g(A) = L_g(\pi) \), and vice versa.
Let \( L_U \) be the language of \( \rho \)-uniform guarded strings: \( \{ \rho\, a_1\, \rho\, a_2 \cdots \rho\, a_k\, \rho \mid \rho \subseteq \mathbf{P}, a_i \in \Sigma \} \). Clearly there is a guarded regular expression for this language: \( \sum_{\rho \subseteq \mathbf{P}} ({?\phi_\rho} \cdot (a_1 + \cdots + a_m))^* \cdot {?\phi_\rho} \) if \( \Sigma = \{a_1, \ldots, a_m\} \). Let \( U_\pi = C_\pi \cap L_U \) be the \( \rho \)-uniform part of \( C_\pi \). Following the idea in the previous section, we first need to prove the following lemma:
#### 4.3.4. Lemma. Given a guarded regular expression \( \pi \) over \( \Sigma \) and \( \mathbf{P} \), we can finitely partition \( U_\pi \) into \( \pi_0, \ldots, \pi_k \) such that for any \( i \leq k \): \( w, v \in L_g(\pi_i) \implies \pi\backslash w = \pi\backslash v \) and \( \phi^w_\pi = \phi^v_\pi \).
**Proof** (Sketch) The strategy for the proof is as follows: we first partition \( U_\pi \) into \( \pi_0, \ldots, \pi_n \) such that for any \( i \leq n \) and any \( w, v \in L_g(\pi_i) \): \( \phi^w_\pi = \phi^v_\pi \); then we further partition each \( \pi_i \) according to the shared derivatives as in Lemma 4.2.2.
From Theorem 4.3.3 we can build deterministic guarded automata \( A_\pi \) and \( A_U \) such that \( L_g(A_\pi) = L_g(\pi) \) and \( L_g(A_U) = L_U \). From the definition of deterministic guarded automata, we can assume that in such deterministic automata test states only have action states as successors, for otherwise the successor test states can be pruned.[6] Now, setting all the action states in \( A_\pi \) that can reach some accept state as the new accept states, we obtain a guarded automaton \( A_C \) such that \( L_g(A_C) = C_\pi \). Finally we can build an automaton \( \bar{A} \) such that \( L_g(\bar{A}) = C_\pi \cap L_U = U_\pi \) by the usual automata product of \( A_C \) and \( A_U \).
It is not hard to see that if you start with a $\{\rho\}$ transition in $\bar{A}$ then you can never go through a $\{\rho'\}$ transition with $\rho' \neq \rho$ on a path leading to an accept state. Thus, once all the states that do not lead to any accept state are pruned, the automaton decomposes into disjoint zones, one for each initial valuation $\rho_i$, rooted at states $s_i$:

[Figure: the pruned automaton $\bar{A}$ splits into disjoint $\rho_i$-zones with start states $s_i$.]
[6]Since all the test transitions are labelled by \( \{ \rho \} \) for some \( \rho \subseteq \mathbf{P} \), two consecutive tests are either identical or logically exclusive.
We can then separate the $\rho_i$ "zones" from each other by taking each $s_i$ as the start state for zone $\rho_i$. Let $B_i$ be the standard finite automaton over action set $\Sigma$: $(Q_{act}, \Sigma, s_i, \rightarrow, F)$ where $Q_{act}$ is the set of action states in $Q$, $F$ is the set of accept states of $\bar{A}$ that are also in $Q_{act}$, and $q \xrightarrow{a} q'$ in $B_i$ iff $q \xrightarrow{a} \cdot \xrightarrow{\{\rho_i\}} q'$ in $\bar{A}$. Given $Z \subseteq \{\rho_0, \ldots, \rho_k\}$, let $D_Z$ be the product automaton $\prod_{\rho_i \in Z} B_i \times \prod_{\rho_i \notin Z} \overline{B_i}$ where $\overline{B_i}$ is the complement automaton of $B_i$. We can show that $D_Z$ recognizes all the sequences $u \in \Sigma^*$ such that $Z = \{\rho \mid v \propto \pi$ for the $\rho$-uniform $v$ with $L(v) = u\}$. Thus without much effort, we can turn $D_Z$ into a finite guarded automaton which recognizes exactly the guarded strings $v$ in $U_\pi$ such that:
$$\phi^v_\pi = \bigvee \{ \phi_\rho \mid \rho \in Z \}$$
Thus $U_\pi$ can be partitioned into finitely many regular expressions $\pi_i$ such that for any $w, v \in L_g(\pi_i)$: $\phi^w_\pi = \phi^v_\pi$. By similar techniques as in the previous section, we can further partition each of these regular expressions $\pi_i$ into finitely many regular expressions $\pi_{i0}, \ldots, \pi_{im}$ such that for any $w, v \in L_g(\pi_{ij})$: $\pi\backslash w = \pi\backslash v$. Thus we can partition $U_\pi$ into the $\pi_{ij}$ such that for any $i \leq k, j \leq m$:
$$w, v \in L_g(\pi_{ij}) \implies \pi\backslash w = \pi\backslash v \text{ and } \phi^w_\pi = \phi^v_\pi$$
We write $\pi\backslash\pi_{ij}$ for the common derivative and $\phi_{ij}$ for the common information formula.
Now we define the following translation from PDL$^{!?}$ to its fragment without $[!\pi]$, which is a PDL language with knowledge operators and Boolean tests:
$$
\begin{align*}
t(\phi) &= t^{\top}_{\Sigma^*}(\phi) \\
t^{\psi}_{\pi}(p) &= p \\
t^{\psi}_{\pi}(\neg\phi) &= \neg t^{\psi}_{\pi}(\phi) \\
t^{\psi}_{\pi}(\phi_1 \land \phi_2) &= t^{\psi}_{\pi}(\phi_1) \land t^{\psi}_{\pi}(\phi_2) \\
t^{\psi}_{\pi}(K_i\phi) &= K_i(\psi \rightarrow t^{\psi}_{\pi}(\phi)) \\
t^{\psi}_{\pi}([!\pi']\phi) &= \langle \pi' \rangle\top \rightarrow t^{\top}_{\pi'}(\phi) \\
t^{\psi}_{\pi}([\pi']\phi) &= \bigwedge_{i \leq n} [\theta_i]\, t^{\phi_i}_{\pi\backslash\pi_i}(\phi)
\end{align*}
$$
where $\pi_0, \ldots, \pi_n$ form a partition of $U_\pi$ satisfying the requirements stated by the above lemma, $\theta_i$ is a regular expression corresponding to $L_g(\pi') \cap L_g(\pi_i)$, $\pi\backslash\pi_i$ is $\pi\backslash w$ for any $w \in L_g(\pi_i)$, and $\phi_i$ is $\phi^w_\pi$ for any $w \in L_g(\pi_i)$.
Note that the translated formulas still contain program modalities. We now argue that we can further eliminate them. Since we assumed that every event $a \in \Sigma$ is executable at any state when we disregard the protocol constraint, we can actually replace each $a \in \Sigma$ with $?\top$ in the translated formula. Now the program modalities appearing in the translated formula are action-free. Without much effort, we can convert a regular expression of tests into a single test, e.g., $?\chi_1 \cdot ?\chi_2$ is equivalent to $?(\chi_1 \land \chi_2)$ and $?\chi_1 + ?\chi_2$ to $?(\chi_1 \lor \chi_2)$. Finally we can eliminate such
\footnote{Note that if $\psi$ is Boolean then $t(\psi) = \psi$; this applies, e.g., to the antecedent $\psi$ in the clause for $K_i\phi$.}
test modalities by the validity \( [?\phi]\psi \leftrightarrow (\phi \rightarrow \psi) \). Let \( t'(\phi) \) be the formula obtained from \( t(\phi) \) by further eliminating program modalities as described above; then we can translate PDL$^{!?}$ to basic epistemic logic EL (and thus also to PDL).
**4.3.5. Theorem.** For any pointed S5 Kripke model \( M, s \) with \( M = (S, \mathbf{P}, \mathbf{I}, \sim, V) \):
\[
M, s \vDash_{PDL^{!?}} \phi \iff M, s \vDash_{EL} t'(\phi).
\]
As an example, consider the formula \( \phi = [!({?p \cdot a} + b)][a]K_i p \):
\[
\begin{align*}
t(\phi) &= \langle {?p \cdot a} + b \rangle\top \rightarrow t^{\top}_{{?p \cdot a} + b}([a]K_i p) \\
&= \langle {?p \cdot a} + b \rangle\top \rightarrow [{?p \cdot a}]\,K_i(p \rightarrow p)
\end{align*}
\]
Therefore by replacing \( a \) and \( b \) with \( ?\top \) we have:
\[
\langle {?p \cdot ?\top} + {?\top} \rangle\top \rightarrow [{?p \cdot ?\top}]K_i(p \rightarrow p)
\]
It is easy to see that the above formula is logically equivalent to \( \top \rightarrow [?p]K_i(p \rightarrow p) \), which is equivalent to \( \top \). Indeed, \( [!({?p \cdot a} + b)][a]K_i p \) is a valid formula.
### 4.4 Update Logic PDL$^{\boxtimes}$
PDL$^{!}$ and PDL$^{!?}$ presented in the previous sections are limited in their convenience for modelling epistemic protocols due to the following issues:
- The restriction to Boolean tests excludes the possibility of handling protocols with more complicated pre-conditions, e.g., \( ?K_i p \cdot a: \) if you know \( p \) then do \( a \).
- The protocols are interpreted as languages of strings, thus we cannot handle branching structures which are useful when considering branching protocols e.g., strategies in games.
- PDL$^{!?}$ does not allow complicated epistemic actions as in DEL [BM04], but only public events.
- The changes of protocols are assumed to be public and agents do not have initial uncertainties about protocols.
As we have seen in the previous sections, operations on finite automata are crucial in proving various results. A natural idea is to use automata directly as modalities in the language. Inspired by [KvB04], in this section we generalize the notion of event models in DEL and introduce a version of PDL with product modalities taking automata as arguments, such that the above issues can be handled.
To encode initial uncertainties of protocols we first need to enrich Kripke models with protocol information. Following the notion in process algebra, we use Kripke models with (successful) termination:
4.4.1. Definition. **(Kripke Model with Termination)** A Kripke model with termination (KMT) is a tuple $M = (S, P, Σ, →, V, F)$ where $(S, P, Σ, →, V)$ is a standard Kripke model and $F \subseteq S$ is a set of terminating states. We also write $s ↓$ for $s \in F$. A pointed KMT is a KMT with a designated state in it.
Intuitively, the protocol encoded at a state in a KMT can be “read off” by viewing the KMT as an automaton with the designated state as the start state and terminating states as the accept states. A classic Kripke model $M = (S, P, Σ, →, V)$ can be viewed as a KMT with the universal termination: $M^ι = (S, P, Σ, →, V, S)$. The uncertainties of the initial protocols can be modelled by epistemic relations among those states where different protocols are encoded. Bisimulation on KMTs can be defined in a straightforward way:
4.4.2. Definition. **(Bisimulation on KMT)** A binary relation $R$ between the domains of two KMTs $(S, \mathbf{P}, \Sigma, \rightarrow, V, F)$ and $(S', \mathbf{P}, \Sigma, \rightarrow', V', F')$ is called a bisimulation iff $(s, s') \in R$ implies that the following conditions hold:
- $s \in F \iff s' \in F'$;
- $p \in V(s) \iff p \in V(s')$;
- if $s \xrightarrow{a} t$ then there exists $t'$ such that $s' \xrightarrow{a} t'$ and $tRt'$;
- if $s' \xrightarrow{a} t'$ then there exists $t$ such that $s \xrightarrow{a} t$ and $tRt'$.
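For finite KMTs, the greatest bisimulation can be computed as a fixpoint: start from all pairs that agree on valuation and termination, then repeatedly delete pairs violating the forth and back conditions. A minimal sketch (the dictionary-based KMT encoding is our own):

```python
def bisimilar(M1, M2, s1, s2):
    """Greatest-fixpoint bisimilarity check for two finite KMTs.

    A KMT is encoded (our own convention) as a dict with:
      'V': state -> frozenset of true propositions
      'F': set of terminating states
      'R': state -> {action: set of successor states}
    """
    rel = {(t1, t2) for t1 in M1['V'] for t2 in M2['V']
           if M1['V'][t1] == M2['V'][t2]            # same valuation
           and (t1 in M1['F']) == (t2 in M2['F'])}  # same termination status
    changed = True
    while changed:
        changed = False
        for (t1, t2) in list(rel):
            acts = set(M1['R'].get(t1, {})) | set(M2['R'].get(t2, {}))
            for a in acts:
                succ1 = M1['R'].get(t1, {}).get(a, set())
                succ2 = M2['R'].get(t2, {}).get(a, set())
                forth = all(any((u1, u2) in rel for u2 in succ2) for u1 in succ1)
                back = all(any((u1, u2) in rel for u1 in succ1) for u2 in succ2)
                if not (forth and back):
                    rel.discard((t1, t2))
                    changed = True
                    break
    return (s1, s2) in rel

# a one-state a-loop is bisimilar to a two-state a-cycle with the same labelling
M1 = {'V': {'s': frozenset({'p'})}, 'F': {'s'}, 'R': {'s': {'a': {'s'}}}}
M2 = {'V': {'t': frozenset({'p'}), 'u': frozenset({'p'})}, 'F': {'t', 'u'},
      'R': {'t': {'a': {'u'}}, 'u': {'a': {'t'}}}}
assert bisimilar(M1, M2, 's', 't')
# ... but not if one of the two cycle states stops being terminating
M3 = {'V': M2['V'], 'F': {'t'}, 'R': M2['R']}
assert not bisimilar(M1, M3, 's', 't')
```

The second assertion illustrates the first clause of the definition: dropping a state from $F'$ breaks bisimilarity even though valuations and transitions are untouched.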
In this section we build our PDL-style language by using finite automata in two ways: first as program modalities, which are alternative representations of modalities with regular expressions with tests as in PDL (cf., e.g., [HKT00]); and second, as update models, the generalized counterpart of the protocol announcements in PDL$^{!}$ and PDL$^{!?}$. The formulas of our update logic PDL$^{\boxtimes}$ are built from $\mathbf{P}$ and $\Sigma$ as follows:
$$\phi ::= \top \mid {\downarrow} \mid p \mid \neg \phi \mid \phi \land \phi \mid [A] \phi \mid [\boxtimes A] \phi$$
where $p \in \mathbf{P}$, ${\downarrow}$ is a constant for successful termination, and each $A = (Q, \Phi, \Sigma, q_0, \rightarrow, G)$ is an automaton over actions in $\Sigma$ and tests in a finite set $\Phi$ of PDL$^{\boxtimes}$ formulas.[8]
Intuitively, $[A]\phi$ says "after any execution of the program encoded by $A$, $\phi$ holds". $[\boxtimes A]\phi$ expresses "after updating the current protocol with the one encoded by $A$, $\phi$ holds". To simplify the notation, we sometimes write $[\pi^A]$ for the automaton modality corresponding to the regular expression with tests $\pi$.
The semantics for the crucial formulas is given as follows:[9]
\[
\begin{align*}
M, s \vDash {\downarrow} &\iff s \in F \\
M, s \vDash [A]\phi &\iff \text{for all } s': s[\![w]\!]s' \text{ for some } w \in L(A) \text{ implies } M, s' \vDash \phi \\
M, s \vDash [\boxtimes A]\phi &\iff (M, s) \boxtimes A \vDash \phi
\end{align*}
\]
[8] The formulas in $\Phi$ should be constructed at the earlier stages of the mutual induction on PDL$^{\boxtimes}$ formulas $\phi$ and automata $A$.
[9] According to the semantics, $[\boxtimes A]\phi$ is an unconditional update, thus $[\boxtimes A]\phi \leftrightarrow \langle\boxtimes A\rangle\phi$ is valid.
where \( s[\![w]\!]s' \) is defined as on page 14 for the standard PDL, and the operation \( \boxtimes \) is defined as follows:
**4.4.3. Definition. (Update Product \( \boxtimes \))** Given a KMT \( M = (S, \mathbf{P}, \Sigma, \rightarrow, V, F) \) and a guarded automaton \( A = (Q, \Phi, \Sigma, q_0, \rightarrow, G) \), the product model is a KMT:
\[
M \boxtimes A = (S', \mathbf{P}, \Sigma, \rightarrow', V', F')
\]
where:
\[
\begin{align*}
S' &= S \times Q \\
\rightarrow' &= \{ ((s, q), a, (s', q')) \mid s \xrightarrow{a} s',\ q \xrightarrow{\vec{\phi}} \xrightarrow{a} q' \text{ and } M, s \vDash \textstyle\bigwedge\vec{\phi} \} \\
V'((s, q)) &= V(s) \\
F' &= \{ (s, q) \mid s \in F \text{ and } q \xrightarrow{\vec{\phi}} q'' \text{ for some } q'' \in G \text{ with } M, s \vDash \textstyle\bigwedge\vec{\phi} \}
\end{align*}
\]
and \( \vec{\phi} \) is a possibly empty sequence of tests in \( \Phi \). We let \( \bigwedge\vec{\phi} \) be the conjunction of the formulas in \( \vec{\phi} \) and let it be \( \top \) if \( \vec{\phi} \) is empty. For pointed models, \( (M, s_0) \boxtimes A \) is defined as \( (M \boxtimes A, (s_0, q_0)) \).
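The update product can be prototyped for finite models. In this sketch (the encoding is our own; tests are assumed to be atomic propositions so that `holds` needs no mutual recursion with the semantics, and the terminating-state clause follows the reading given above):

```python
def closure(A, q, s, holds):
    """A-states reachable from q via test transitions whose tests hold at s."""
    seen, stack = {q}, [q]
    while stack:
        cur = stack.pop()
        for (phi, q2) in A['tests'].get(cur, []):
            if holds(phi, s) and q2 not in seen:
                seen.add(q2)
                stack.append(q2)
    return seen

def update_product(M, A, holds):
    """The update product M ⊠ A of a finite KMT and a guarded automaton.

    Encodings are ours: M = {'V','R','F'}; A = {'Q','tests','acts','G'} with
    'tests': state -> [(test, successor)], 'acts': state -> {action: set of states}.
    """
    R2, F2 = {}, set()
    for s in M['V']:
        for q in A['Q']:
            cl = closure(A, q, s, holds)
            # (s,q) --a--> (s',q') iff s --a--> s' in M and q -tests-> . --a--> q' in A,
            # where the tests hold at s
            for a, succs in M['R'].get(s, {}).items():
                targets = {(s2, q2) for qc in cl
                           for q2 in A['acts'].get(qc, {}).get(a, set())
                           for s2 in succs}
                if targets:
                    R2.setdefault((s, q), {})[a] = targets
            # terminating iff s terminates in M and q reaches an accept state via true tests
            if s in M['F'] and cl & A['G']:
                F2.add((s, q))
    V2 = {(s, q): M['V'][s] for s in M['V'] for q in A['Q']}
    return {'V': V2, 'R': R2, 'F': F2}

# M: s0 --a--> s1 (terminating);  A: q0 --?p--> q1 --a--> q2 (accepting)
M = {'V': {'s0': frozenset({'p'}), 's1': frozenset()},
     'R': {'s0': {'a': {'s1'}}}, 'F': {'s1'}}
A = {'Q': {'q0', 'q1', 'q2'}, 'tests': {'q0': [('p', 'q1')]},
     'acts': {'q1': {'a': {'q2'}}}, 'G': {'q2'}}
holds = lambda phi, s: phi in M['V'][s]      # tests are atomic propositions here
MA = update_product(M, A, holds)
assert MA['R'][('s0', 'q0')]['a'] == {('s1', 'q2')}
assert ('s1', 'q2') in MA['F']
```

Since $p$ holds at $s_0$, the test transition of $A$ is passable there, so the $a$-step survives in the product; the pair $(s_1, q_2)$ terminates because $s_1$ does and $q_2$ is accepting.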
**4.4.4. Example.** We only name a few important states, e.g., $s_0$ in $M$ where $p$ holds. In the product model $M \boxtimes A$ below, we only show the generated submodel w.r.t. $(s_0, q_0)$:

[Diagram: the KMT $M$, the update automaton $A$, and the generated submodel of $M \boxtimes A$ at $(s_0, q_0)$.]
In \( M \), after executing \( a \) we have a choice of \( b \) and \( c \) (both can lead to successful termination). \( A \) encodes the protocol: if \( p \) then do \( a \cdot b \), and if \( \neg p \) then do \( a \cdot c \). Updating \( M \) with \( A \) we obtain \( M \boxtimes A \), where the choice between \( b \) and \( c \) after executing \( a \) is no longer possible; instead the choice between \( a \cdot b \) and \( a \cdot c \) is made at the beginning. According to the semantics: \( M, s_0 \vDash \langle a^A\rangle(\langle b^A\rangle{\downarrow} \land \langle c^A\rangle{\downarrow}) \land [\boxtimes A][a^A]\neg(\langle b^A\rangle{\downarrow} \land \langle c^A\rangle{\downarrow}) \).
As observed in [KvB04], we can view a classic event model of [BMS98] as an automaton where each state with outgoing transitions is guarded by a unique test (the precondition of the state in the event model).
On the other hand, our guarded automata based updates give us more freedom in modelling protocols compared to the event models. Consider the following simple update model denoting the protocol "if \( p \) then you do \( a \) and if \( q \) then you do \( b \)". It cannot be mimicked by any single-pointed event model. Instead the update can be simulated by a multi-pointed event model combining three single-pointed event models with mutually exclusive preconditions at the designated worlds (if we disregard the termination information):
\[
A: \quad q_0 \xrightarrow{?p} \bullet \xrightarrow{a} \bullet{\downarrow} \qquad\qquad q_0 \xrightarrow{?q} \bullet \xrightarrow{b} \bullet{\downarrow}
\]
\[
(e_0 : p \land q) \qquad (e_1 : p \land \neg q) \qquad (e_2 : \neg p \land q)
\]
where \((e : \phi)\) denotes that the precondition of \(e\) is \(\phi\).
In the sequel, given an automaton \(A\), we let \(A_q\) be the automaton as \(A\) but with start state \(q\), and let \(A^q\) be the automaton as \(A\) but with \(q\) as the only accept state.
[KvB04] shows that PDL with event model update is equally expressive as PDL itself, by defining a translation pushing the product operators to the inner parts of the formulas in order to eliminate them in the end. We will show that PDL\(^{\boxtimes}\) can be translated back to PDL as well by following the same idea. In particular, for a formula of the shape \([\boxtimes A][B]\phi\), we need to translate it into some formula of the shape \([B'][\boxtimes A']\psi\). Namely, we need to mimic the program \(B\) after the update \(\boxtimes A\) by some program before the update. For this purpose, we first define a new product between automata to handle the interaction between the modalities \([B]\) and \([\boxtimes A]\).
#### 4.4.5. Definition. (Sequential Product \(\rtimes\))
Given two automata \(A = (Q, \Phi, \Sigma, q_0, \rightarrow, F)\) and \(A' = (Q', \Phi', \Sigma, q'_0, \rightarrow', F')\), the sequential product \(A \rtimes A'\) is again an automaton \((Q^\times, \Phi \cup \Phi'', \Sigma, (q_0, q'_0), \rightarrow^\times, F^\times)\) where:
\[
\begin{align*}
\Phi'' &= \{ \langle\boxtimes A_q\rangle\psi \mid \psi \in \Phi', q \in Q \} \\
Q^\times &= Q \times Q' \\
(q_1, q'_1) \xrightarrow{a}{}^{\times} (q_2, q'_2) &\iff q_1 \xrightarrow{a} q_2 \text{ and } q'_1 \xrightarrow{a}{}' q'_2 \\
(q_1, q'_1) \xrightarrow{\phi}{}^{\times} (q_2, q'_2) &\iff (\phi = \langle\boxtimes A_{q_1}\rangle\psi,\ q_1 = q_2 \text{ and } q'_1 \xrightarrow{\psi}{}' q'_2) \text{ or } (\phi \in \Phi,\ q_1 \xrightarrow{\phi} q_2 \text{ and } q'_1 = q'_2) \\
F^\times &= \{ (q, q') \mid q \in F \text{ and } q' \in F' \}
\end{align*}
\]
Here is an example of a sequential product:

[Diagram: automata $A$ and $A'$ and their sequential product $A \rtimes A'$; the test transitions of $A'$ appear in the product relabelled as $\langle\boxtimes A_q\rangle\psi$, while the action transitions synchronize.]
Let PDL$_{\mathsf{A}}$ be the $[\boxtimes A]$-free fragment of PDL$^{\boxtimes}$. We can then define a translation $t$ from the language of PDL$^{\boxtimes}$ to the language of PDL$_{\mathsf{A}}$ by pushing $\boxtimes A$ through the other modalities (cf. [KvB04]):
$$
\begin{align*}
t(\top) &= \top \\
t({\downarrow}) &= {\downarrow} \\
t(p) &= p \\
t(\neg\phi) &= \neg t(\phi) \\
t(\phi_1 \land \phi_2) &= t(\phi_1) \land t(\phi_2) \\
t([A]\phi) &= [t(A)]\,t(\phi) \\
t([\boxtimes A]\top) &= \top \\
t([\boxtimes A]{\downarrow}) &= {\downarrow} \land t(\chi_A) \\
t([\boxtimes A]p) &= p \\
t([\boxtimes A]\neg\phi) &= \neg t([\boxtimes A]\phi) \\
t([\boxtimes A](\phi_1 \land \phi_2)) &= t([\boxtimes A]\phi_1) \land t([\boxtimes A]\phi_2) \\
t([\boxtimes A][B]\phi) &= \bigwedge_{q \in Q} t([A^q \rtimes B][\boxtimes A_q]\phi) \\
t([\boxtimes A][\boxtimes B]\phi) &= t([\boxtimes A]\,t([\boxtimes B]\phi))
\end{align*}
$$
where $t(A)$ is the automaton where each test label $\psi$ in $A$ is replaced by $t(\psi)$, and $\chi_A = \bigvee \{ \bigwedge\vec{\phi} \mid \vec{\phi} \in L(A) \cap \Phi^* \}$. Intuitively, $\chi_A$ is the "termination test" of $A$: the disjunction of the combined tests which can lead to an accept state without going through any action transitions. Note that $\chi_A$ is essentially not an infinite disjunction, since we assume the set of test labels is finite and thus, modulo logical equivalence, there are only finitely many conjunctions $\bigwedge\vec{\phi}$. $\chi_A$ can be computed as follows: we revise $A$ by only keeping the accept states that are reachable from the start state in $A$ via test transitions only; then we can turn this new finite automaton into a regular expression of tests; finally we turn this regular expression into a formula as mentioned at the end of the previous section. By proving the faithfulness of the translation we can show:
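The computation of $\chi_A$ described above amounts to collecting the test labels along action-free paths from the start state to an accept state. A sketch (the encoding is ours; termination relies on the assumption stated earlier that every cycle contains an action transition, so a test-only path never needs more than $|Q|$ steps):

```python
def chi_A(A):
    """Collect the test sequences along action-free accepting paths from the start state.

    χ_A is then the disjunction of the conjunctions of the returned tuples.
    Encoding (ours): A = {'Q', 'q0', 'tests', 'G'} with
    'tests': state -> [(test, successor)], 'G': set of accept states.
    """
    results, stack = set(), [(A['q0'], ())]
    while stack:
        q, path = stack.pop()
        if q in A['G']:
            results.add(path)
        for (phi, q2) in A['tests'].get(q, []):
            # bound the depth: longer test-only paths would have to revisit a state
            if len(path) < len(A['Q']):
                stack.append((q2, path + (phi,)))
    return results

# A: q0 --?p--> q1 (accepting); any action transitions are irrelevant for χ_A
A = {'Q': {'q0', 'q1', 'q2'}, 'q0': 'q0',
     'tests': {'q0': [('p', 'q1')]}, 'G': {'q1'}}
assert chi_A(A) == {('p',)}   # χ_A = p
```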
4.4.6. Theorem. PDL$^{\boxtimes}$ over KMTs is equally expressive as PDL$_{\mathsf{A}}$ over KMTs.
Proof (sketch) The non-trivial case is to check that $\langle\boxtimes A\rangle\langle B\rangle\phi \leftrightarrow \bigvee_{q \in Q}\langle A^q \rtimes B\rangle\langle\boxtimes A_q\rangle\phi$ is valid.
\(\Rightarrow\): Suppose \(M, s_0 \vDash \langle\boxtimes A\rangle\langle B\rangle\phi\); then \((M, s_0) \boxtimes A \vDash \langle B\rangle\phi\). Thus there is a path \(v \in L(B)\) such that \((s_0, q_0)[\![v]\!](s, q)\) for some \((s, q)\) in \((M, s_0) \boxtimes A\) and \(M \boxtimes A, (s, q) \vDash \phi\) (equivalently \(M, s \vDash \langle\boxtimes A_q\rangle\phi\)). In order to show \(M, s_0 \vDash \langle A^q \rtimes B\rangle\langle\boxtimes A_q\rangle\phi\), we need to find a \(w \in L(A^q \rtimes B)\) such that \(s_0[\![w]\!]s\). The spirit of the proof is to match sequences as follows:
Each step of the path \(v\) through \(B\) is matched position by position: an action step \(q'_k \xrightarrow{a} q'_{k+1}\) in \(B\) corresponds to an action step \((s_k, q_k) \xrightarrow{a} (s_{k+1}, q_{k+1})\) in \(M \boxtimes A\), to the synchronized step \((q_k, q'_k) \xrightarrow{a} (q_{k+1}, q'_{k+1})\) in \(A^q \rtimes B\), and to the step \(s_k \xrightarrow{a} s_{k+1}\) in \(M\). A test step \(q'_k \xrightarrow{\psi} q'_{k+1}\) in \(B\) corresponds to the test step \((q_k, q'_k) \xrightarrow{\langle\boxtimes A_{q_k}\rangle\psi} (q_k, q'_{k+1})\) in \(A^q \rtimes B\), whose label holds at \(s_k\) in \(M\) precisely because \(\psi\) holds at \((s_k, q_k)\) in \(M \boxtimes A\). The tests of \(A\) along the run are absorbed into the transitions of \(M \boxtimes A\) and hold at the corresponding states of \(M\). The matched path ends in \((s, q)\) with \(M, s \vDash \langle\boxtimes A_q\rangle\phi\), as required.
\(\Leftarrow\): Suppose there is a \(q\) in \(A\) such that \(M, s_0 \vDash \langle A^q \rtimes B\rangle\langle\boxtimes A_q\rangle\phi\). Then there is a \(w \in L(A^q \rtimes B)\) and an \(s\) in \(M\) such that \(s_0[\![w]\!]s\) and \(M, s \vDash \langle\boxtimes A_q\rangle\phi\) (equivalently \(M \boxtimes A, (s, q) \vDash \phi\)). To prove \(\langle\boxtimes A\rangle\langle B\rangle\phi\), we only need to find some \(v \in L(B)\) such that \((s_0, q_0)[\![v]\!](s, q)\) in \(M \boxtimes A\); such a \(v\) can be read off from \(w\) by reversing the above matching.
### 4.5 Conclusion and Future Work
Protocols are important components of social software [Par02] that govern human behaviour in social interactions. In this chapter we studied the dynamics of protocols. We proposed three PDL-style logics for reasoning about protocol changes: PDL$^{!}$ handles protocol changes in a context without knowledge; PDL$^{!?}$ extends PDL$^{!}$ with knowledge operators and Boolean tests, so it can deal with situations where events carry information according to the protocols; PDL$^{\boxtimes}$ extends the DEL framework with more general product update operations taking guarded automata as update models, which allows us to model branching protocols involving complicated tests. We showed that these three logics can all be translated to PDL. What we gain is the explicitness of the language and convenience in modelling scenarios with protocol changes, as we demonstrated by various examples. For readers interested in more applications of the protocol changing operations, we refer to [WSvE10],
where we integrated the protocol changing operator as in PDL$^{!}$ in a specific setting of communication over channels. It is shown in [Lut06] that public announcement logic, though equally expressive as epistemic logic, is exponentially more succinct than pure epistemic logic in expressing certain properties on unrestricted models.\(^{10}\) Here we conjecture that similar results apply to our logics as well. However, we leave the succinctness and complexity analysis for future work.
One thing we did not cover in this chapter is the higher-order change of protocols. For example, *I am asking you to ask her to do something* can be viewed as an announcement of a protocol concerning another protocol announcement. In the logics we presented in this chapter we did not consider protocol updates as basic events, thus excluding protocol announcements such as \( !(!\pi \cdot \pi') \). The exact semantics for such announcements can be complicated, and is left for future work\(^11\).
Last but not least, we may introduce more operations on automata other than \( \boxtimes \), e.g., continuation or replacement, similarly to the generalized protocol announcements mentioned at the end of Section 4.2. It is interesting to see whether PDL is still closed under such extra operations and how the new operators can help to define existing dynamic operators, e.g., the various belief revision operators and preference upgrades as in [BS08a, vBL07, Liu08].
\(^10\)not on S5 models as desired though
\(^11\)The propositional coding technique which deals with higher-order event models in [Auc09] may be useful.
|
olmocr_science_pdfs
|
2024-11-24
|
2024-11-24
|
8738e7f56cb8fb5009acc01f008b2d9257d6684c
|
[REMOVED]
|
{"Source-Url": "https://kar.kent.ac.uk/47469/1/local_143542.pdf", "len_cl100k_base": 14503, "olmocr-version": "0.1.53", "pdf-total-pages": 19, "total-fallback-pages": 0, "total-input-tokens": 65570, "total-output-tokens": 16889, "length": "2e13", "weborganizer": {"__label__adult": 0.0003762245178222656, "__label__art_design": 0.000286102294921875, "__label__crime_law": 0.0002675056457519531, "__label__education_jobs": 0.0004343986511230469, "__label__entertainment": 5.066394805908203e-05, "__label__fashion_beauty": 0.00014674663543701172, "__label__finance_business": 0.0001442432403564453, "__label__food_dining": 0.0003767013549804687, "__label__games": 0.0004570484161376953, "__label__hardware": 0.0005698204040527344, "__label__health": 0.0003864765167236328, "__label__history": 0.00019502639770507812, "__label__home_hobbies": 6.848573684692383e-05, "__label__industrial": 0.00030350685119628906, "__label__literature": 0.0002357959747314453, "__label__politics": 0.0002448558807373047, "__label__religion": 0.00046324729919433594, "__label__science_tech": 0.004489898681640625, "__label__social_life": 7.593631744384766e-05, "__label__software": 0.002689361572265625, "__label__software_dev": 0.98681640625, "__label__sports_fitness": 0.0003044605255126953, "__label__transportation": 0.0004565715789794922, "__label__travel": 0.00020122528076171875}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 57106, 0.01074]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 57106, 0.27741]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 57106, 0.82168]], "google_gemma-3-12b-it_contains_pii": [[0, 1288, false], [1288, 3837, null], [3837, 6467, null], [6467, 8851, null], [8851, 11991, null], [11991, 15435, null], [15435, 18770, null], [18770, 21105, null], [21105, 24770, null], [24770, 28141, null], [28141, 33122, null], [33122, 36786, null], [36786, 
39726, null], [39726, 42645, null], [42645, 45617, null], [45617, 48607, null], [48607, 51374, null], [51374, 54149, null], [54149, 57106, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1288, true], [1288, 3837, null], [3837, 6467, null], [6467, 8851, null], [8851, 11991, null], [11991, 15435, null], [15435, 18770, null], [18770, 21105, null], [21105, 24770, null], [24770, 28141, null], [28141, 33122, null], [33122, 36786, null], [36786, 39726, null], [39726, 42645, null], [42645, 45617, null], [45617, 48607, null], [48607, 51374, null], [51374, 54149, null], [54149, 57106, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 57106, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 57106, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 57106, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 57106, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 57106, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 57106, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 57106, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 57106, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 57106, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 57106, null]], "pdf_page_numbers": [[0, 1288, 1], [1288, 3837, 2], [3837, 6467, 3], [6467, 8851, 4], [8851, 11991, 5], [11991, 15435, 6], [15435, 18770, 7], [18770, 21105, 8], [21105, 24770, 9], [24770, 28141, 10], [28141, 33122, 11], [33122, 36786, 12], [36786, 39726, 13], [39726, 42645, 14], [42645, 45617, 15], [45617, 48607, 16], [48607, 51374, 17], [51374, 54149, 18], [54149, 57106, 19]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 57106, 0.02727]]}
|
olmocr_science_pdfs
|
2024-12-08
|
2024-12-08
|
801b516291e71057a0a7391791238ffb6abda56c
|
Primers or Reminders?
The Effects of Existing Review Comments on Code Review
Spadini, Davide; Calikli, Gul; Bacchelli, Alberto

DOI: 10.1145/3377811.3380385
Publication date: 2020
Document Version: Final published version
Published in: Proceedings of the 42nd International Conference on Software Engineering (ICSE '20)

Important note: To cite this publication, please use the final published version (if applicable). Please check the document version above.
Primers or Reminders?
The Effects of Existing Review Comments on Code Review
Davide Spadini
d.spadini@sig.eu
Software Improvement Group &
Delft University of Technology
Amsterdam & Delft, The Netherlands
Gül Çalikli
gul.calikli@gu.se
Chalmers & University of Gothenburg
Gothenburg, Sweden
Alberto Bacchelli
bacchelli@ifi.uzh.ch
University of Zurich
Zurich, Switzerland
ABSTRACT
In contemporary code review, the comments put by reviewers on a specific code change are immediately visible to the other reviewers involved. Could this visibility prime new reviewers’ attention (due to the human’s proneness to availability bias), thus biasing the code review outcome? In this study, we investigate this topic by conducting a controlled experiment with 85 developers who perform a code review and a psychological experiment. With the psychological experiment, we find that ≈70% of participants are prone to availability bias. However, when it comes to the code review, our experiment results show that participants are primed only when the existing code review comment is about a type of bug that is not normally considered; when this comment is visible, participants are more likely to find another occurrence of this type of bug. Moreover, this priming effect does not influence reviewers’ likelihood of detecting other types of bugs. Our findings suggest that the current code review practice is effective because existing review comments about bugs in code changes are not negative primers, rather positive reminders for bugs that would otherwise be overlooked during code review. Data and materials: https://doi.org/10.5281/zenodo.3653856
CCS CONCEPTS
• Software and its engineering → Software verification and validation.
KEYWORDS
Code Review, Availability Heuristic, Priming
## 1 INTRODUCTION
Peer code review is a well-established practice that aims at maintaining and promoting source code quality, as well as sustaining development teams by means of improved knowledge transfer, awareness, and solutions to problems [3, 5, 27, 41].
In the code review type that is most common nowadays [7], the author of a code change sends the change for review to peer developers (also known as reviewers), before the change can be integrated into production. Previous research on three popular open-source software projects has found that three to five reviewers are involved in each review [44]. Using a software review tool, the reviewers and the author conduct an asynchronous online discussion to collectively judge whether the proposed code change is of sufficiently high quality and adheres to the guidelines of the project. In widespread code review tools, reviewers’ comments are immediately visible as they are written by their authors; could this visibility bias the other reviewers’ judgment?
If we consider the peer review setting for scientific articles, reviewers normally judge (at least initially) the merit of the submitted work independently from each other. The rationale behind such preference is to mitigate group members’ influences on each other that might lead to errors in the individual judgments [34]. It is reasonable to think that also in code review, the visibility of existing review comments made by other developers may affect one’s individual judgment, leading to an erroneous judgment.
An existing comment may prime new reviewers on a specific type of bug, due to the availability bias [30]. Availability bias is the tendency to be influenced by information that can be easily retrieved from memory (i.e., easy to recall) [21]. This bias is one of the many cognitive biases identified in psychology, sociology, and management research [30]. Cognitive biases are systematic deviations from optimal reasoning [30, 47, 48]. In the cognitive psychology literature, Kahneman and Tversky showed that humans are prone to availability bias [51]. For example, one may avoid traveling by plane after having seen recent plane accidents on the news, or may see conspiracies everywhere as a result of watching too many spy movies [21]. Therefore, it seems fitting to imagine that a reviewer may be biased toward a certain bug type, by readily seeing another reviewer’s comment on such a bug type. This bias would likely result in a distorted code review outcome.
In this paper, we present a controlled experiment we devised and conducted to test the current code review setup and reviewers’ proneness to availability bias. More specifically, we examine whether priming a reviewer on a bug type (achieved by showing an existing review comment) biases the outcome of code review.
Our experiment was completed by 85 developers, 73% of whom reported having at least three years of professional development experience. We required each developer to conduct a code review in which an existing comment was either shown (treatment group) or not (control group).
Based on the availability bias literature, we expected the primed participants (treatment group) to be more likely to find the bug of the same type (as it is already available in memory), but less likely to find the other bug type (since distracted by the comment). Surprisingly, instead, our results show that—for three out of four bugs—the code review outcome does not change between the treatment and control groups. After testing our results for robustness, we could find no evidence indicating that, for these three bugs, the outcome of the review is biased in the presence of an existing review comment priming them on a bug type. Only for one bug type, though, we have strong evidence that the behavior of the reviewers changed: When the previous review comment was about a type of bug that is normally not considered during developers’ coding/review practices (i.e., checking for NullPointerException on a method’s parameters), the reviewers were more likely to find the same type of bug with a strong effect.
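To make the bug type concrete, here is a minimal, hypothetical Java sketch (not taken from the experiment's actual review material; class and method names are our own) of a method that is missing a null check on its parameter, the kind of defect the primed reviewers became more likely to spot:

```java
import java.util.List;

// Hypothetical illustration of the bug type discussed above: a method
// parameter dereferenced without a null check, raising a
// NullPointerException for a null argument.
public class NullParamExample {

    // Buggy version: iterating over a null list throws NullPointerException.
    static int totalLength(List<String> names) {
        int total = 0;
        for (String n : names) { // NPE here when names == null
            total += n.length();
        }
        return total;
    }

    // Defensive version: validates the parameter explicitly.
    static int totalLengthChecked(List<String> names) {
        if (names == null) {
            return 0; // or throw IllegalArgumentException with a clear message
        }
        int total = 0;
        for (String n : names) {
            total += n.length();
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(totalLengthChecked(null));               // prints 0
        System.out.println(totalLengthChecked(List.of("a", "bc"))); // prints 3
    }
}
```

Per the paper's results, a reviewer who sees an existing comment on one such missing null check is more likely to notice a second occurrence in the same change.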
Overall, we interpret the results of our experiment as an indication that existing review comments do not act as negative primers, rather as positive reminders. As such, our experiment provides evidence that the current collaborative code review practice, adopted by most software projects, could be more beneficial than separate individual reviews, not only in terms of efficiency and social advantages, but also in terms of its effectiveness in finding bugs.
## 2 BACKGROUND AND RELATED WORK
In this section, we review the literature on human aspects in contemporary code review practices, as well as studies on scientific peer review. Subsequently, we provide background on cognitive biases in general and present relevant studies in Software Engineering (SE). We also provide a separate subsection on availability bias, which consists of some theoretical background and existing research on availability bias in SE.
### 2.1 Human aspects in modern code review
Past research has provided evidence that human factors determine code review performance to a significant degree and that code review is a collaborative process [3]. Empirical studies conducted at companies such as Google [41] and Microsoft [3] revealed that, besides finding defects and ensuring maintainability, motivations for reviewing code are knowledge transfer (e.g., education of junior developers) and improving shared code ownership, which is closely related to team awareness and transparency.
Besides being a collaborative activity, code review is also demanding from a cognitive point of view for the individual reviewer. A large amount of research is focused on improving code review tools and processes based on the assumption that reducing reviewers’ cognitive load improves their code review performance [7, 50]. For instance, Baum et al. [9] argue that the reviewer and review tool can be regarded as a joint cognitive system, also emphasizing the importance of off-loading cognitive processes from the reviewer to the tool. Ebert et al. [16] conducted a study to understand the factors that confuse code reviewers through manual analysis of 800 comments from code review of the Android project, and later they built a series of automatic classifiers (e.g., Multinomial Naive Bayes, OneR) for identification of confusion in review comments. Baum et al. [8] conducted experiments to examine the association of working memory capacity and cognitive load with code review performance. They found that working memory capacity is associated with the effectiveness of finding de-localized defects. However, the authors could not find substantial evidence on the influence of change part ordering on mental load or review performance. Spadini et al. [46] designed and conducted a controlled experiment to investigate whether examining changed test code before the changed production code (also known as Test Driven Code Review or TDR) affects code review effectiveness. According to the findings of Spadini et al., developers adopting TDR find the same number of defects in production code, but more defects in test code and fewer maintainability issues in the production code.
Significantly related to the work we present in this paper is the recent empirical observational study by Thongtanunam and Hassan [49]. They investigated the relationship between the evaluation decision of a reviewer and the visible information about a patch under review (e.g., comments and votes by prior co-reviewers) [49]. With an observational study on tens of thousands of patches from two popular open-source software systems, Thongtanunam and Hassan found that (1) the amount of feedback and co-working frequency between reviewer and patch author are highly associated with the likelihood of the reviewer providing a positive vote and that (2) the proportion of reviewers who provided a vote consistent with prior reviewers is significantly associated with the defect-proneness of a patch (even though other factors are stronger). These results corroborate the hypothesis that there is some sort of influence generated by the visible information about the change under review on the behavior of the reviewers [49]. In the work we present in this paper, we set up a controlled setting to investigate an angle of this influence further, hoping to shed more light on the causal connection between comments’ visibility and reviewers’ effectiveness.
### 2.2 Scientific peer review
Peer review is the main form of group decision making used to allocate scientific research grants and select manuscripts for publication. Many studies demonstrated that individual psychological processes are subject to social influences [15]. This finding also points to issues that might arise during group decision making. Experimental results obtained by Deutsch and Gerard [15] show that when a group situation is created, normative social influences grossly increase, leading to errors in individual judgment. Based on the findings of this study, it is emphasized that group consensus succeeds only if groups encourage their members to express their own, independent judgments. Therefore, one of the procedures for peer review of scientific research grant applications is ‘written individual review’ [34]. With this review procedure, reviewers judge the merit of a grant application in written form, independently of one another, before the final decision maker approves or rejects an application. Written individual review can mitigate the influence of reviewers on the way to reach a collective judgment. It is also used in scientific venues to eliminate biases. There is also another form of review procedure, namely panel peer review, where a common
judgment is reached through mutual social exchange [34]. In panel peer review, a group of reviewers convene to jointly deliberate and judge the merit of an application before the funding decision is made. However, as also emphasized by Deutsch and Gerard [15], it is crucial to encourage individual members to express their own judgment without feeling under the pressure of normative social influences for proper functioning of group decision making.
### 2.3 Cognitive biases in software engineering
Cognitive biases are defined as systematic deviations from optimal reasoning [30, 47, 48]. In the past six decades, hundreds of empirical studies have been conducted showing the existence of various cognitive biases in humans’ thought processes [21, 48]. Although many theories explain why cognitive biases exist, Baron [6] stated that there is no evidence so far about the existence of a single reason or generative mechanism that can explain the existence of all cognitive bias types. Some theories see cognitive bias as the by-product of cognitive heuristics that humans developed due to their cognitive limitations (e.g., information processing power) and time pressure, whereas some relate them to emotions.
Human cognition is a crucial part of software engineering research since software is developed by people for people. In their systematic mapping study [30], Mohanani et al. report 37 different cognitive biases that have been investigated by software engineering studies so far. According to the results of this systematic mapping study, the cognitive biases that are most common in software engineering studies are anchoring bias, confirmation bias, and overconfidence bias. Anchoring bias results from forming initial estimates about a problem under uncertainty and focusing on these initial estimates without making sufficient modifications in the light of more recently acquired information [21, 47]. Anchoring bias has so far been studied in software engineering research within the scope of requirements elicitation [37], pair programming [19], software reuse [35], software project management [2], and effort estimation [25]. Confirmation bias is the tendency to search for, interpret, favor, and recall information in a way that affirms one’s prior beliefs or hypotheses [38]. The manifestations of confirmation bias during unit testing and how it affects software defect density have been widely studied in the software engineering literature [11, 12, 24].
No positive effect of experience on the mitigation of confirmation bias has been discovered so far [10]. However, in some studies, participants who had been trained in logical reasoning and hypothesis testing skills manifested less of a tendency towards confirmatory behavior during software testing [10]. Ko and Myers identify confirmation bias among the cognitive biases that cause errors in programming systems [23]. Van Vliet and Tang indicate that during software architecture design, some organizations assign a devil’s advocate so that one’s proposal is not followed without any questioning [52]. Overconfidence bias manifests when a person’s subjective confidence in their judgement is reliably greater than the objective accuracy of such a judgement [31]. This bias type has been studied within the context of pair programming [19], requirements elicitation [13] and project cost estimation [26].
Availability bias. Availability bias is the tendency to be influenced by information that can be easily retrieved from memory (i.e., easy to recall) [21]. The definition of availability bias was first formulated by Tversky and Kahneman [51], who conducted a series of experiments to explore this judgemental bias. However, including these original experiments, many psychology experiments do not go beyond comparing two groups (i.e., control and test groups) that differ in availability. To the best of our knowledge, in the cognitive psychology literature, the only experiment providing evidence for the mediating process that manifests availability bias was devised by Gabrielcik and Fazio, who employed (memory) priming as the mediating process [18].
Availability bias has also been studied in SE research. De Graaf et al. [14] examined software professionals’ strategies to search for documentation by using think-aloud protocols. The authors claim that using an incorrect or incomplete set of keywords, or ignoring certain locations while looking for documents due to availability bias, might lead to huge losses. Mohan and Jain [29] claim that while performing changes in design artifacts, developers, due to availability bias, might focus on their past experiences, since such information can be easily retrieved from developers’ memory. However, such information might be inconsistent with the current state of the software system. Mohan et al. [29] propose traceability among design artifacts as a solution to mitigate the negative effects of availability bias and other cognitive biases (i.e., anchoring and confirmation bias). Robbins and Redmiles [39] propose a software architecture design environment, reporting that it supports designers by addressing their cognitive challenges, including availability bias. Jørgensen and Sjøberg [20] argue that while learning from software development experience, learning from the right experiences might be hindered due to availability bias. The authors suggest conducting post-mortem project reviews to mitigate the negative effects of availability bias.
Overall, existing literature points to the potential risks associated with availability bias in SE. As our community has provided evidence that code review is a collaborative and cognitively demanding process and that the collaborative nature of code review also has the potential to affect individual reviewers’ cognition, availability bias could manifest itself during the code review process. This bias could hamper code review effectiveness. In our study, we aim to explore how existing review comments bias the code review outcome.
## 3 EXPERIMENTAL DESIGN
In this section, we explain the design of our experiment.
### 3.1 Research Questions and Hypotheses
The paper is structured along two research questions. By answering these research questions, we aim to understand to what extent contemporary code review is robust to reviewers’ availability bias, depending on the nature of the bug for which a previous comment exists on the code change. Our first research question and the corresponding hypotheses follow.
**RQ1. What is the effect of priming the reviewer with a bug type that is not normally considered?**
We hypothesize that an existing review comment about a bug type that reviewers do not usually consider (such as a null value passed as an argument [4, 7, 40, 42]) might prime the reviewers towards this bug type, so they find more of these bugs. Also, we hypothesize that—due to such priming—reviewers overlook bugs on which they were not primed. Hence, our formal hypotheses are:
\( H_{010} \): Priming subjects with bugs they usually do not consider does not affect their performance in finding bugs of the same type.
\( H_{011} \): Priming subjects with bugs they usually do not consider does not affect their performance in finding bugs they usually look for.
We also explore how priming on a bug that is usually considered during code reviews affects review performance. Therefore, our second research question is:
**RQ2. What is the effect of priming the reviewer with a bug type that is normally looked for?**
We hypothesize that an existing review comment about a bug type that reviewers usually consider also primes the reviewers towards this bug type, so that they find more of these bugs. Also, we expect primed reviewers to only look for the type of bugs on which they are primed, overlooking others. Hence, our formal hypotheses are:
\( H_{020} \): Priming subjects with bugs they usually consider does not affect their performance in finding bugs of the same type.
\( H_{021} \): Priming subjects with bugs they usually consider does not affect their performance in finding bugs they usually do not look for.
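This excerpt does not state which statistical tests the authors used to evaluate these hypotheses. As a generic illustration of how one might compare bug-detection rates between a primed and a non-primed group, the sketch below (all names are ours) implements a two-sided Fisher exact test on a 2x2 contingency table (found bug yes/no by treatment group):

```java
// Illustrative two-sided Fisher exact test for a 2x2 table; not the
// paper's actual analysis. Suitable for the small group sizes of a
// controlled experiment.
public class FisherExact {

    // log(k!) computed by summing logs; accurate enough for small counts
    static double logFact(int k) {
        double s = 0.0;
        for (int i = 2; i <= k; i++) s += Math.log(i);
        return s;
    }

    // Probability of the 2x2 table [a b; c d] under fixed margins
    // (hypergeometric distribution)
    static double tableProb(int a, int b, int c, int d) {
        int n = a + b + c + d;
        return Math.exp(logFact(a + b) + logFact(c + d)
                + logFact(a + c) + logFact(b + d)
                - logFact(n) - logFact(a) - logFact(b)
                - logFact(c) - logFact(d));
    }

    // Two-sided p-value: sum the probabilities of all tables with the
    // same margins whose probability does not exceed the observed one.
    static double pValue(int a, int b, int c, int d) {
        int r1 = a + b, c1 = a + c, n = a + b + c + d;
        double pObs = tableProb(a, b, c, d);
        double p = 0.0;
        int lo = Math.max(0, r1 + c1 - n), hi = Math.min(r1, c1);
        for (int x = lo; x <= hi; x++) {
            double px = tableProb(x, r1 - x, c1 - x, n - r1 - c1 + x);
            if (px <= pObs * (1 + 1e-9)) p += px;
        }
        return p;
    }

    public static void main(String[] args) {
        // e.g., 3 of 3 primed reviewers found the bug vs 0 of 3 non-primed
        System.out.println(pValue(3, 0, 0, 3)); // two-sided p, approx. 0.1
    }
}
```

With such small cell counts an exact test is preferable to a chi-square approximation, which assumes larger expected frequencies.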
### 3.2 Experiment Design and Structure
To conduct the code review experiment and to assess participants’ proneness to availability bias, we extend the browser-based tool CRExperiment [43]. The tool allows us to (i) visualize and perform a code review, (ii) collect data through questions asking for subjects’ demographic information as well as data consisting of participants’ interactions with the tool, (iii) collect data to measure subjects’ proneness to availability bias, by using a memory priming set-up to trigger subjects’ use of the availability heuristic, followed by a survey. Both the priming set-up and the survey are inherited from a classic experiment in the cognitive psychology literature that was designed by Gabrielcik and Fazio [18].
Code Review Experiment Overview. For the code review experiment, we follow an independent measures design [22] augmented with some additional phases. The following stages in the browser-based tool correspond to the code review experiment:
(1) Welcome Page: The welcome page provides participants with information about the experiment. This page also aims to avoid demand characteristics [33], which are cues and hints that can make the participants aware of the goals of this research study leading to change in their behaviour during the experiment. For this purpose, we do not inform the participants about the full purpose of the experiment, rather they are only told that the experiment aims to compare code review performance under different circumstances. Before starting the experiment, the subjects are also asked for their informed consent.
(2) Participants’ Demographics: On the next page, subjects are asked questions to collect demographic information as well as confounding factors, such as: (i) gender, (ii) age, (iii) proficiency in the English language, (iv) highest obtained education degree, (v) main role, (vi) years of experience in software development, (vii) current frequency in software development, (viii) years of experience in Java programming, (ix) years of experience in doing code reviews, (x) current frequency of doing code reviews, and (xi) the number of hours subjects worked that day. Answering these questions is mandatory before subjects proceed to the next page, where they receive more information about the code review experiment they are about to take part in. We ask these questions to measure subjects’ real, relevant, and recent experience. Collecting such data helps us to identify which portion of the developer population is represented by subjects who take part in our experiment [17].
(3) Actual Experiment: Each participant is then asked to perform a code review and is randomly assigned to one of the following two treatments:
• \( \text{Pr (primed)} \) – The subject is given a code change to review where there exists a review comment (made by a previous reviewer) about a bug in the code. The test group of our experiment comprises the subjects who are assigned to this treatment.
• \( \text{NPr (not–primed)} \) – The subject is given a code change to review. In the code change, there are no comments made by any other reviewers. The control group of our experiment comprises the subjects who are assigned to this treatment.
More specifically, the patch to review contains three bugs: two of the same type (\( \text{Bug}_A \)) and one of a different type (\( \text{Bug}_B \)). In the \( \text{Pr} \) group, the review starts with a comment made by another reviewer showing that one instance of \( \text{Bug}_A \) is present. The participant is then asked to continue the review. In the \( \text{NPr} \) group, the review starts without comments. The comments shown to the participants in the \( \text{Pr} \) group were written by the authors, and the wording was refined with feedback from the pilots (Section 3.5). Each participant is asked to take the task very seriously. More specifically, we ask them to find as many defects as possible and, like in real life, spend as little time as possible on the review. However, unlike in real life, we ask them not to pay attention to maintainability or design issues, but only to correctness issues ("bugs"). For example, we discard comments regarding variable naming or small refactorings.
(4) Interruptions during the Experiment: Immediately after completing the code review, the participants are asked whether they were interrupted during the task and for how long.
(5) Follow-up Questions: In the last page of the code review experiment, the participants are shown the code change they just reviewed together with the bugs disclosed: For each bug, we show it and explain why it is a defect and in what cases
Instructions
We are now going to show you the code changes to review. The old version of the code is on the left, the new version is on the right.
For the scientific validity of this experiment, it is vital that the review task is taken very seriously.
- Like in real life, you should find as many defects as possible and you should spend as little time as possible on the review.
- Unlike in real life, we are not interested in maintainability or design issues, but only in correctness issues ("bugs").
For example, a remark like the following is beyond the goal of the review: "Create a new class which is implemented by runnable interface that we can access multiple times." Instead, what we are interested in are the defects that make the code not work as intended under all circumstances.
Please ensure that the code compiles and that the tests pass.
You will see that a previous reviewer already put a comment in line 23. You are now asked to continue with your review.
To add a review remark, click on the corresponding line number. To delete a review remark, click on it again and delete the remark’s text.
**Figure 1: Example of a code review using the tool.**
```
public class ExerciseSumArray {
public static void main(String[] args) {
int[] array = {1, 2, 3, 4, 5};
int sum = 0;
for (int i = 0; i < array.length; i++) {
sum += array[i];
}
System.out.println("The sum is: "+sum);
}
}
```
```
// Given 2 Lists representing numbers e.g., [1,4, 2, 9], [9, 9] = 99;
// calculate the sum of 2 Lists, and return the result in one List.
// For example:
// [1, 0, 8] = [6, 4, 0]
// [4, 7] = [8, 6, 7]
// 3
```
it might fail. Then, for each bug, we ask the participants to indicate whether they captured it in the review:
- If the participants found the bug and they belonged to the Pr group, we ask them to what extent the comment of the previous reviewer influenced the discovery of the bug (using a 5-point Likert scale).
- If the participants did not find the bug (independently of whether they were in the Pr or NPr group), we ask them to elaborate on why they think they missed the bug.
**Assessment of Proneness to Availability Bias.** The code review experiment is followed by a set-up that primes participants’ memory to trigger availability bias. This set-up serves as a mediating process to manipulate availability bias so that we can measure the extent to which each subject is prone to this type of cognitive bias.
To measure this phenomenon, we inherited the test part of the controlled experiment of Gabrielcik and Fazio [18]. In the original experiment, the difference in the results of the control and test groups showed that (memory) priming triggered the participants’ availability bias. We chose this design because (i) the cognition mechanism (i.e., memory priming) that triggers availability bias is explicitly devised; (ii) the memory priming mechanism is also employed in our code review experiment to trigger participants’ availability bias; and (iii) the survey in the original experiment makes it possible to quantitatively assess participants’ proneness to availability bias. Therefore, the remaining stages in the browser-based tool comprise the following:
(1) **Welcome Page:** We provide a second welcome page in which, to avoid demand characteristics [33], the participants are told that they are about to participate in an experiment that aims to explore software engineers’ attention by testing a set of visual stimuli, instead of the actual goal.
(2) **Warm-up Session:** We proceed with a warm-up session in which participants are asked to focus on a series of 20 words flashing once each on the screen. The words are randomly selected from the English dictionary, and none of them contain the letter ‘T’. Each word flashes for 300 ms. At the end of the warm-up, we ask the participants to write three words they have seen and recall, and to make a guess if they do not remember them.
(3) **Actual Psychology Experiment:** After the warm-up, we proceed with the actual psychology experiment: this time, we show two series of 20 words, all of them containing the
letter ‘T’. This time, the words flash at a faster rate, i.e., 150 ms, to prevent the participants from consciously recognizing that the words contain the letter ‘T’ so often, which would bias their last task [18]. After each series, we ask the participants to write three words they have seen and recall, and to make a guess if they do not remember them.
(4) **Measuring Proneness to Availability Bias:** The last task for the participants is to answer 15 questions, which ask them to compare the frequency of words containing a given pair of letters in the English dictionary. For example, given the question “Do more words contain T or S?”, participants respond on a 9-point scale, with one end labeled “Many more contain T” and the other “Many more contain S”. Our main goal is to measure the extent to which each subject is prone to availability bias. Hence, in 5 of the 15 questions we ask whether the English dictionary contains more words with the letter ‘T’ or with another random letter. As in the experiment of Gabrielcik and Fazio [18], we expect the participants to indicate that there are more words containing the letter ‘T’ (even though this is not the case), since they were primed in step 3. The other 10 questions are used to prevent the participants from understanding the actual aim of the study.
3.3 Objects
The objects of the study are the code changes (or patches, for brevity) to review and the bugs that we selected and injected, which must be discovered by the participants.
Patches. To avoid giving some developers an advantage, the two patches are not selected from open-source software projects; hence, they are not known to any of the participants. To keep the difficulty of the code review reasonable (after all, developers are used to reviewing only the codebase on which they work every day), we screened many websites that offer Java exercises, searching for exercises that are (1) neither too trivial nor too complicated (based on our experience teaching programming to students), (2) self-contained, and (3) not reliant on special technologies or frameworks/libraries.
After several brainstorming sessions among the authors, only two exercises satisfied these goals and were selected.
Defects. Code review is a well-established and widely adopted practice aimed at maintaining and promoting software quality [3, 41]. There are different reasons why developers adopt this practice, but one of the main ones is to detect defects [3]. Hence, in our experiment we manually seed bugs (functional defects) in the code. More specifically, we seed two different types of bugs: one that could cause a NullPointerException (BugA), and one that could cause the return of a wrong value (BugB).
The bugs were injected in the code as follows:
- In Patch1, we inject two instances of BugA and one of BugB (the priming is done on BugA).
- In Patch2, we inject two instances of BugB and one of BugA (the priming is done on BugB).
The NullPointerException (BugA) in the first change concerns the passed parameters. As reported by white [7] and gray [4, 40, 42] literature, developers are not used to checking for this kind of error in code review, because they expect the caller to make sure the parameters are not null; hence, we use it as the not normally considered bug that we investigate in RQ1. Instead, BugA in the second change (RQ2) does not regard a parameter, to make sure that it is a bug type that developers normally look for in a review.
3.4 Variables and Measurement Details
We aim to investigate whether participants who are primed on a specific type of bug are more likely to capture only that type of bug. To understand whether the subjects found the bug (i.e., the value of our dependent variables), we proceed with the following steps: (1) the first author of this paper manually analyzes all the remarks added by the participants (each remark is classified as identifying a bug or as being outside of the study’s scope), then (2) the authors cross-validate the results with the answers given by the participants (as explained in Section 3.2, after the experiment the participants had to indicate whether they captured the bugs).
In Table 1, we present all the variables of our model. The main independent variable of our experiment is the treatment (Pr or NPr). We consider the other variables as control variables, which also include the time spent on the review, the participant’s role, years of experience in Java and code review, and tiredness. Finally, we run a logistic regression model similar to the one used by McIntosh et al. [28] and Spadini et al. [46]. To ensure that the selected logistic regression model is appropriate for the available data, we first (1) compute the Variance Inflation Factors (VIF) as a standard test for multicollinearity, finding all the values to be below 3 (values should be below 10), thus indicating little or no multicollinearity among the independent variables, (2) run a multilevel regression model to check whether there is a significant variance among reviewers, but we found little to none, thus indicating that a single-level regression model is appropriate, and, finally, (3) when building the model we added the independent variables step by step and found that the coefficients remained stable, thus further indicating little to no interference among the variables. For convenience, we include the script in our publicly available replication package [45].
Availability bias score. We calculate availability bias scores as in the original experiment by Gabrielcik and Fazio [18]. The frequency comparisons on the 9-point scale were scored by assigning a value between +4 and −4. Positive numbers were assigned to ratings indicating that the letter ‘T’ was contained in more words than the other letter, while negative numbers were assigned in favor of the other letter. We calculated the availability bias score for each participant as the average (and also the median) of the values for the 5 relevant questions.
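As a concrete illustration, the scoring can be sketched as follows (a minimal sketch, not the authors' analysis script; we assume the 9-point answers are coded 1–9, with 9 meaning "Many more contain T"):

```java
public class BiasScore {
    // Map a 1..9 rating to the -4..+4 scoring of Gabrielcik and Fazio [18]:
    // the midpoint (5) maps to 0, the 'T' end (9) to +4, the other end (1) to -4.
    static int toScore(int rating) {
        return rating - 5;
    }

    // Availability bias score: average score over the 5 relevant questions.
    static double biasScore(int[] ratings) {
        double sum = 0;
        for (int r : ratings) {
            sum += toScore(r);
        }
        return sum / ratings.length;
    }

    public static void main(String[] args) {
        // Example participant leaning towards 'T' on most questions.
        int[] ratings = {7, 8, 6, 9, 5};
        System.out.println(biasScore(ratings)); // prints 2.0
    }
}
```

A positive average indicates that the participant overestimated the frequency of words containing ‘T’, i.e., was primed.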
3.5 Pilot Runs
Once the first version of the experiment was ready, we started conducting pilot runs to (1) verify the absence of technical errors in the online platform, (2) check the ratio with which participants were able to find the injected bugs (regardless of their treatment group), (3) tune the experiment on the proneness to availability bias (in terms of flashing speed and number of words to ask), (4) verify the understandability of the instructions as well as the user interface, and (5) gather qualitative feedback from the participants. We conducted three different pilot runs, for a total of 20 developers. The participants were recruited through the professional network of the study authors to ensure that they would take the task seriously and
provide feedback on their experience. No data gathered from the 20 participants to the pilot was considered in the final experiment.
After each pilot run, we inspected the results and the qualitative feedback we received and discussed extensively among the authors to verify whether parts of the experiment should have been changed. After the third run, the required changes were minimal, and we considered the experiment ready for its main run.
3.6 Recruiting Participants
The experiment was advertised through practitioners’ blogs and web forums (e.g., Reddit), through direct contacts from the professional network of the study authors, as well as through the authors’ social media accounts on Twitter and Facebook. We did not reveal the aim of the experiment. To provide a small incentive to participate, we introduced a donation-based incentive of five USD to a charity.
4 THREATS TO VALIDITY
Construct Validity. Threats to construct validity concern our research instruments. To measure the extent to which subjects are prone to availability bias, we used the memory priming mechanism and the survey that were employed in an experiment designed and conducted by Gabrielcik and Fazio in the cognitive psychology literature [18]. Data obtained from the controlled experiment that Gabrielcik and Fazio conducted provide direct evidence that memory priming can be a mediating process to trigger availability bias. The remaining constructs we use are defined in previous publications, and we reuse the existing instruments as much as possible. For instance, the tool employed for the online experiment is based on similar tools used in earlier works [9, 46].
To avoid problems with the experimental materials, we employed a multi-stage process: after tests among the authors, we conducted three pilot runs with ≈7 external subjects each time (for a total of 20 pilot participants). After each pilot session, we made corrections to the experiment based on the feedback from the subjects of the pilot; the materials were then checked by the authors one more time before we launched the actual experiment.
Regarding defects and code changes, the first author prepared the code changes and the corresponding test code, and injected the defects into these code changes. These were later checked by the other authors. The code change and the corresponding test code were on the same page, and subjects had to scroll down to proceed to the next page of the online experiment. In this way, we aimed to ensure that subjects saw the test code. The test code was added to make the experiment closer to a real-world scenario.
A major threat is that the artificial experiment we created could differ from a real-world scenario. We mitigated this issue by (1) re-creating a real code change as closely as possible (for example, submitting test code and documentation together with the production code), and (2) using an interface that is identical to the popular code review tool Gerrit [1] (both our tool and Gerrit use Mergely [36] to show the diff, also using the same color scheme).
Internal Validity. Threats to internal validity concern factors that might affect the cause-and-effect relationship investigated through the experiment. Due to the online nature of the experiment, we cannot ensure that our subjects conducted the experiments with the same set-up (e.g., noise level and web searches); however, we argue that developers in real-world settings also work with a multitude of tools and environments. Moreover, to mitigate the possible threat posed by the missing control over subjects, we included some questions to characterize our sample (e.g., experience, role, and education).
To prevent duplicate participation, we adjusted the settings of the online experiment platform so that each subject can take the experiment only once. To exclude participants who did not take the experiment seriously, we screened each review and we did not consider experiments without any comments in the review, that took less than five minutes to be completed, or that were not completed at all.
Furthermore, several background factors (e.g., age, gender, experience, education) may have an impact on the results. Hence, we collected all such information and investigated how these factors affect the results by conducting statistical tests.
External Validity. Threats to external validity concern the generalizability of the results. To have a diverse sample of subjects (representative of the overall population of software developers who
<table>
<thead>
<tr>
<th>Metric</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Treatment</td>
<td>Type of the treatment (Pr or NPr)</td>
</tr>
<tr>
<td colspan="2"><em>Control Variables</em></td>
</tr>
<tr>
<td>Gender</td>
<td>Gender of the participant</td>
</tr>
<tr>
<td>Age</td>
<td>Age of the participant</td>
</tr>
<tr>
<td>LevelOfEducation</td>
<td>Highest achieved level of education</td>
</tr>
<tr>
<td>Role</td>
<td>Role of the participant</td>
</tr>
<tr>
<td>ProfDevExp</td>
<td>Years of experience as professional developer</td>
</tr>
<tr>
<td>JavaExp</td>
<td>Years of experience in Java</td>
</tr>
<tr>
<td>ProgramPractice</td>
<td>How often they program</td>
</tr>
<tr>
<td>ReviewPractice</td>
<td>How often they perform code review</td>
</tr>
<tr>
<td>ReviewExp</td>
<td>Years of experience in code review</td>
</tr>
<tr>
<td>WorkedHours</td>
<td>Hours the participant worked before performing the experiment</td>
</tr>
<tr>
<td>Tired</td>
<td>How tired the participant was at the moment of taking the experiment</td>
</tr>
<tr>
<td>Stressed</td>
<td>How stressed the participant was at the moment of taking the experiment</td>
</tr>
<tr>
<td>Interruptions</td>
<td>For how long the participant was interrupted during the experiment</td>
</tr>
<tr>
<td>TotalDuration</td>
<td>Total duration of the experiment</td>
</tr>
<tr>
<td>PsychoPrimed</td>
<td>Whether the participant was primed in the psychology experiment</td>
</tr>
</tbody>
</table>
(†) see Figure 2 for the scale.
Table 1: Variables used in the statistical model.
employ contemporary code review, we invited developers from several countries, organizations, education levels, and background.
**Figure 2: Descriptive statistics on the participants’ role, experience, and practice.**
5 RESULTS
In this section, we report the results of our investigation on whether and how having a comment from a previous reviewer influences the outcome of code review.
5.1 Validating The Participants
A total of 243 people accessed our experiment environment following the provided link. From these participants, we exclude all the instances in which the code change is skipped or skimmed, by demanding either at least one entered remark or more than five minutes spent on the review. After applying the exclusion criteria, a total of 85 participants are selected for the subsequent analyses.
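The screening step amounts to a simple filter over the recorded sessions. The sketch below is illustrative only; the `Session` fields are hypothetical, not the authors' data model:

```java
import java.util.List;
import java.util.stream.Collectors;

public class Screening {
    // Hypothetical record of one experiment session.
    static class Session {
        int remarks;        // number of review remarks entered
        long reviewSeconds; // time spent on the review

        Session(int remarks, long reviewSeconds) {
            this.remarks = remarks;
            this.reviewSeconds = reviewSeconds;
        }
    }

    // Keep sessions with at least one remark or more than five minutes of review.
    static List<Session> screen(List<Session> all) {
        return all.stream()
                  .filter(s -> s.remarks >= 1 || s.reviewSeconds > 5 * 60)
                  .collect(Collectors.toList());
    }
}
```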
Figure 2 presents the descriptive statistics on what the participants reported in terms of their role, experience, and practice. The majority of the participants are programmers (67%) and reported many years of experience in professional software development (73% more than 3 years, 47% more than 6); most program daily (69%) and review code at least weekly (63%).
Table 2 reports how the participants are distributed across the considered treatments and code changes. The automated assignment algorithm allowed us to obtain a rather balanced number of reviews per treatment and code change.
5.2 RQ1. Priming a not commonly reviewed bug
To investigate our first research question, the participants in our test group (Pr) are primed on a NullPointerException (NPE) bug in a method’s parameter. We expect this type of bug to be missed by most non-primed reviewers, because reviewers would normally assume that parameters are checked by the calling function [4, 40, 42].
Table 3 reports the results of the experiment by treatment group. From the first part of the table (primed bug), we can notice that participants in the Pr group found the other NPE bug 62% of the times, while participants in the NPr group only 11% of the times. Expressed in odds, this result means that the NPE defect is 12 times more likely to be found by a participant in the Pr group. The main reasons reported by the participants in the NPr group for missing this bug are that (1) they were too focused on the logic and not thorough enough when it comes to corner cases, (2) they did not pay attention to the fact that the Integer could be null, and (3) they generally do not check for NPEs, but assume they will not receive a wrong object as input.
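For reference, the odds ratio and its 95% confidence interval reported in Table 3 can be reproduced from the raw counts (13/8 found/not found in the Pr group, 2/15 in the NPr group). This is an illustrative sketch, not the authors' analysis script; it uses the standard Wald interval on the log-odds scale:

```java
public class OddsRatio {
    // Odds ratio for a 2x2 table: (a/b) / (c/d).
    static double oddsRatio(double a, double b, double c, double d) {
        return (a / b) / (c / d);
    }

    // 95% Wald confidence interval, computed on the log-odds scale.
    static double[] waldCI(double a, double b, double c, double d) {
        double logOr = Math.log(oddsRatio(a, b, c, d));
        double se = Math.sqrt(1 / a + 1 / b + 1 / c + 1 / d);
        return new double[] { Math.exp(logOr - 1.96 * se), Math.exp(logOr + 1.96 * se) };
    }

    public static void main(String[] args) {
        // Primed NPE bug: Pr group 13 found / 8 not found; NPr group 2 found / 15 not found.
        System.out.printf("OR = %.2f%n", oddsRatio(13, 8, 2, 15)); // prints OR = 12.19
        double[] ci = waldCI(13, 8, 2, 15);
        // Closely matches the (2.19, 67.94) interval reported in Table 3.
        System.out.printf("95%% CI = (%.2f, %.2f)%n", ci[0], ci[1]);
    }
}
```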
As expected, even though NullPointerException has been reported to be the most common bug in Java programs [53], developers stated that they rarely sanity-check the object. However, as shown in Table 3, the result drastically changes when a previous reviewer has commented on another NPE bug in the code.
When we look at whether the Pr group was primed by the previous reviewer’s comment (hence, whether they were able to capture the bug because they had been primed), we find that 40% indicated they were ‘Extremely influenced’, 40% ‘Very influenced’, and 20% ‘Somewhat influenced’. Hence, the reviewers perceived that they had been influenced by the existing comment.
We find a statistically significant relationship (**p < 0.001**, assessed using *χ²*) of strong positive strength (*ϕ = 0.5*) between the treatment and the discovery of the primed bug.
---
**Table 2: Distribution of participants (N = 85) across the various treatment groups.**
<table>
<thead>
<tr>
<th></th>
<th>Primed (Pr)</th>
<th>Not Primed (NPr)</th>
<th>Total</th>
</tr>
</thead>
<tbody>
<tr>
<td>CodeChange1</td>
<td>21</td>
<td>17</td>
<td>38</td>
</tr>
<tr>
<td>CodeChange2</td>
<td>22</td>
<td>25</td>
<td>47</td>
</tr>
<tr>
<td><strong>Total</strong></td>
<td><strong>43</strong></td>
<td><strong>42</strong></td>
<td><strong>85</strong></td>
</tr>
</tbody>
</table>
**Table 3: Odds ratio for capturing the primed and not primed bug in the test (Pr) and control (NPr) group.**
<table>
<thead>
<tr>
<th>Bug Type</th>
<th></th>
<th>Primed (Pr)</th>
<th>Not Primed (NPr)</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="2">Primed bug (NPE)</td>
<td>found</td>
<td>13</td>
<td>2</td>
</tr>
<tr>
<td>not found</td>
<td>8</td>
<td>15</td>
</tr>
<tr>
<td colspan="4"><strong>Odds Ratio:</strong> 12.19 (2.19, 67.94), <em>p</em> <strong>< 0.001</strong></td>
</tr>
<tr>
<td rowspan="2">Not primed bug</td>
<td>found</td>
<td>14</td>
<td>14</td>
</tr>
<tr>
<td>not found</td>
<td>7</td>
<td>3</td>
</tr>
<tr>
<td colspan="4"><strong>Odds Ratio:</strong> 0.43 (0.09, 2.00), <em>p</em> <strong>= 0.275</strong></td>
</tr>
</tbody>
</table>
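The χ² statistic and the φ effect size for the primed-bug counts in Table 3 can be recomputed directly from the 2×2 table. The following is a sketch, not the authors' analysis script; it applies Pearson's χ² without continuity correction:

```java
public class ChiSquarePhi {
    // Pearson's chi-square statistic for a 2x2 table [[a, b], [c, d]],
    // without continuity correction.
    static double chiSquare(double a, double b, double c, double d) {
        double n = a + b + c + d;
        double num = n * Math.pow(a * d - b * c, 2);
        double den = (a + b) * (c + d) * (a + c) * (b + d);
        return num / den;
    }

    // Phi coefficient: effect size for the association in a 2x2 table.
    static double phi(double a, double b, double c, double d) {
        return Math.sqrt(chiSquare(a, b, c, d) / (a + b + c + d));
    }

    public static void main(String[] args) {
        // Rows: found / not found the primed NPE bug; columns: Pr / NPr (Table 3).
        System.out.printf("chi2 = %.2f, phi = %.2f%n",
                chiSquare(13, 2, 8, 15), phi(13, 2, 8, 15)); // chi2 = 9.89, phi = 0.51
    }
}
```

The resulting φ of ≈0.51 matches the "strong positive strength (φ = 0.5)" reported for this association.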
*Finding 1.* Reviewers primed on a bug that is not commonly considered are more likely to find other occurrences of this type of bug. However, this does not prevent them from also finding other types of bugs.

### 5.3 RQ2. Priming on an algorithmic bug

To investigate our second research question, the participants in our test group (Pr) are primed on an algorithmic bug, more specifically a corner case (CC) bug. The result of this experiment is shown in Table 5. Participants in both groups found the primed bug ~50% of the times. Indeed, the difference is not statistically significant ($p = 0.344$). If we consider whether the test group was primed by the previous reviewer’s comment, 50% of the participants reported that they were ‘Extremely influenced’, 10% were ‘Somewhat influenced’, and 40% were slightly or not influenced, thus suggesting that even the reviewers noticed a lower influence from this comment, even though it was of the same type as one of the other two bugs in the same code change.

Among the main reasons for missing the bug, participants mainly stated that (1) the tests drove them to not remember that corner case, and (2) they focused more on the first one. Hence, given this result, we can conclude that the participants who saw the review comment did not find the similar bug more often than the participants who did not see the review comment.

In the second part of Table 5, we indicate whether the participants were able to find the not primed bug. The test and control groups are very similar in this case, too. Indeed, in both groups the bug is found around 50% of the times, and the difference is not statistically significant. When looking at the participants’ comments on why they missed this bug, the main reasons are that (1) they forgot to try the specific corner case, and (2) they assumed the tests were covering all the corner cases. The reasons for not capturing the defects were similar in both groups. Given this result, we cannot reject $H_{01}$. Priming the participants on a specific type of bug did not prevent them from capturing the other type of bug.

**Table 4: Regressions for primed and not primed bugs.**
<table>
<thead>
<tr>
<th>Variable</th>
<th>Primed bug Estimate</th>
<th>S.E.</th>
<th>Sig.</th>
<th>Not primed bug Estimate</th>
<th>S.E.</th>
<th>Sig.</th>
</tr>
</thead>
<tbody>
<tr>
<td>Intercept</td>
<td>0.704</td>
<td>4.734</td>
<td></td>
<td>-0.893</td>
<td>4.093</td>
<td></td>
</tr>
<tr>
<td>IsPrimed</td>
<td>3.627</td>
<td>1.320</td>
<td>**</td>
<td>-1.199</td>
<td>1.073</td>
<td></td>
</tr>
<tr>
<td>TotalDuration</td>
<td>0.001</td>
<td>0.002</td>
<td></td>
<td>0.003</td>
<td>0.001</td>
<td></td>
</tr>
<tr>
<td>ProfDevExp</td>
<td>0.813</td>
<td>0.557</td>
<td></td>
<td>-0.503</td>
<td>0.554</td>
<td></td>
</tr>
<tr>
<td>ProgramPractice</td>
<td>-0.096</td>
<td>0.828</td>
<td></td>
<td>-0.243</td>
<td>0.736</td>
<td></td>
</tr>
<tr>
<td>ReviewExp</td>
<td>-0.070</td>
<td>0.630</td>
<td></td>
<td>-0.813</td>
<td>0.651</td>
<td></td>
</tr>
<tr>
<td>ReviewPractice</td>
<td>-1.152</td>
<td>0.758</td>
<td></td>
<td>1.243</td>
<td>0.643</td>
<td></td>
</tr>
<tr>
<td>Tired</td>
<td>-0.834</td>
<td>0.832</td>
<td></td>
<td>0.517</td>
<td>0.651</td>
<td></td>
</tr>
<tr>
<td>WorkedHours</td>
<td>-0.069</td>
<td>0.196</td>
<td></td>
<td>0.305</td>
<td>0.207</td>
<td></td>
</tr>
<tr>
<td>Interruptions</td>
<td>-1.752</td>
<td>0.758</td>
<td>*</td>
<td>-0.715</td>
<td>0.444</td>
<td></td>
</tr>
<tr>
<td>... (†)</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
significance codes: *** p < 0.001, ** p < 0.01, * p < 0.05, . p < 0.1
(†) Role is not significant and omitted for space reasons
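The estimates in Table 4 are logistic-regression coefficients, i.e., log-odds. A quick way to read them is to exponentiate: for instance, the IsPrimed estimate of 3.627 for the primed bug corresponds to an adjusted odds ratio of roughly 37.6. A minimal sketch of this conversion (not the authors' script):

```java
public class LogOdds {
    // Convert a logistic-regression coefficient (log-odds) into an odds ratio.
    static double toOddsRatio(double estimate) {
        return Math.exp(estimate);
    }

    public static void main(String[] args) {
        // IsPrimed estimate for the primed bug in Table 4.
        System.out.printf("%.1f%n", toOddsRatio(3.627)); // prints 37.6
    }
}
```

This adjusted odds ratio differs from the unadjusted 12.19 of Table 3, which is expected, since the regression controls for the other variables.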
### Finding 2. Reviewers primed on an algorithmic bug perceive an influence, but are as likely as the others to find algorithmic bugs. Furthermore, primed participants did not capture fewer bugs of the other type.
Table 6: Regressions for primed and not primed bugs.
<table>
<thead>
<tr>
<th></th>
<th>Primed bug Estimate</th>
<th>S.E.</th>
<th>Sig</th>
<th>Not primed bug Estimate</th>
<th>S.E.</th>
<th>Sig</th>
</tr>
</thead>
<tbody>
<tr>
<td>Intercept</td>
<td>-1.0151019</td>
<td>.22490623</td>
<td>.</td>
<td>-1.037e-01</td>
<td>.256e-00</td>
<td>.</td>
</tr>
<tr>
<td>IsPrimed</td>
<td>0.9260383</td>
<td>.7223408</td>
<td>.</td>
<td>-1.670e-01</td>
<td>.7740e-01</td>
<td>.</td>
</tr>
<tr>
<td>TotalDuration</td>
<td>0.0018592</td>
<td>0.0008958</td>
<td>*</td>
<td>9.561e-01</td>
<td>9.976e-04</td>
<td>.</td>
</tr>
<tr>
<td>ProfDevExp</td>
<td>-0.6031309</td>
<td>.3381302</td>
<td>.</td>
<td>-9.437e-02</td>
<td>3.721e-01</td>
<td>.</td>
</tr>
<tr>
<td>ProgramPractice</td>
<td>0.0319636</td>
<td>.5965427</td>
<td>.</td>
<td>-1.061e+00</td>
<td>7.553e-01</td>
<td>.</td>
</tr>
<tr>
<td>ReviewExp</td>
<td>0.3415598</td>
<td>.4548836</td>
<td>.</td>
<td>1.284e+00</td>
<td>4.660e-01</td>
<td>.</td>
</tr>
<tr>
<td>ReviewPractice</td>
<td>0.1531502</td>
<td>.3784747</td>
<td>.</td>
<td>1.211e+00</td>
<td>4.683e-01</td>
<td>**</td>
</tr>
<tr>
<td>Tired</td>
<td>0.0883410</td>
<td>.3706685</td>
<td>.</td>
<td>2.486e-01</td>
<td>4.598e-01</td>
<td>.</td>
</tr>
<tr>
<td>WorkedHours</td>
<td>-0.1619234</td>
<td>.1184626</td>
<td>.</td>
<td>2.257e-01</td>
<td>1.542e-01</td>
<td>.</td>
</tr>
<tr>
<td>Interruptions</td>
<td>-0.1755182</td>
<td>.3220796</td>
<td>.</td>
<td>-1.331e-01</td>
<td>3.630e-01</td>
<td>.</td>
</tr>
</tbody>
</table>
(significance codes: **** p < 0.0001, *** p < 0.001, ** p < 0.01, * p < 0.05, . p < 0.1)
(†) Role is not significant and omitted for space reasons
5.4 Robustness Testing
In the previous sections, we presented the results of our study on whether and to what extent reviewers can be primed during code review by showing them an existing code review comment. Surprisingly, the results showed that many of our hypotheses were not satisfied: in our experiment, only in one case did primed reviewers capture more bugs than the not primed group; in all the other cases, reviewers from both groups captured the same bugs.
To further challenge the validity of these findings, in this section, we employ robustness testing [32]. For this purpose, we test whether the results we obtained by our baseline model hold when we systematically replace the baseline model specification with the following plausible alternatives.
**Bugs were too simple or too complicated to find.** Choosing the right defects to inject in the code change is fundamental to the validity of our results. If a defect is too easy to find, participants might find the bugs regardless of any other influencing factor, even without paying much attention to the review (on the other hand, if it is too complicated, reviewers might not find any bug and get discouraged from continuing). We measured that ~50% of the participants found the three types of defects that we expected them to find, thus ruling out the possibility that these bugs were either too trivial or too difficult to find.
**People were not primed.** The entire experiment is based on the premise that reviewers in the Pr group were correctly primed. Even though we cannot verify this premise (the experiment is online, hence there is no interaction between the researchers and the participants), after the code review experiment the participants had to indicate whether they were influenced by the comment of the previous reviewer in capturing the bug. As we stated in Section 5.2 and Section 5.3, 70% of the participants indicated they were extremely or very influenced (12% were neutral). This gives an indication that the participants felt they were indeed primed, but this did not influence their ability to find other bugs.
Nevertheless, the reported level of being influenced is subjective, so not fully reliable (participants could think they have been influenced, but were not). To triangulate this result, we test another possibility: More specifically, one of the possible explanations of why participants may not have been primed is that our sample of participants was “immune” to priming or very difficult to prime. Indeed, there is no study that confirms that developers are as affected by priming as the general population (on which past experiment was conducted). To rule out this possibility, we devised the psychological experiment: We tested whether developers can also be primed as expected using visual stimuli. Our results show that ~70% of the participants were primed as expected.
**Not enough participants.** Another possible reason why we do not find a difference is that we did not have enough participants. Even though 85 participants is quite large in comparison to many experiments in software engineering [8] and we tried to design an experiment that would create a strong signal, we cannot rule out that the significance was missing due to the number of participants. However, even if the results were statistically significant (assuming we had the same ratios, but an order of magnitude more participants), the size of the effect (calculated using the φ coefficient) would be ‘none to very negligible’. This suggests that there was no emerging trend and that, even with more participants, we would have probably obtained a significant, yet trivial effect.
**Some participants did not perform the task seriously.** Finally, one of the reasons why we did not confirm most of our hypotheses could be that some participants did not take the task seriously; hence, they might have performed poorly and altered the results. Having used random assignment and having a reasonably large number of participants, we have no reason to think that one group had more ‘lazy’ participants than the others. Moreover, as we discussed in Section 3, to exclude participants who did not take the experiment seriously, we filtered out experiments without any comments in the review (even when there were comments, the first author manually validated them to check whether they were appropriate and whether or not they captured a bug); we also did not consider reviews that took less than five minutes to be completed, or that were not completed at all (maybe because the participant left after a few minutes).
Alternatively, it is possible that participants who were more serious focused more and found more bugs (regardless of the priming), while less serious ones would just find one and leave the experiment. To test this possibility, we compared the likelihood of a participant finding a second bug when a first one was found. Also in this case, we did not find any statistically significant effect, thus ruling out this hypothesis as well.
6 DISCUSSIONS
We discuss the main implications and results of our study.
**Robustness of code review against availability bias.** The current code review practice expects reviewers to review and comment on the code change asynchronously, and reviewers’ comments are immediately visible to both the author and other reviewers.
One of the main hypotheses we stated in our study is that the code review outcome is biased because reviewers are primed by the visibility of an existing comment on a bug. Indeed, if reviewers are primed by previously made comments about some bug(s), they could find more bugs of that specific type while overlooking other types of bugs. This would, in turn, undermine the effectiveness of the code review process, creating a demand for a different approach.
To create a different approach, one might consider adopting a review method similar to that of scientific venues, where reviewers do not see the comments of the other reviewers until they submit their own review. Even though this strategy would reduce the transparency of the code review process, undermining knowledge transfer, team awareness, and shared code ownership, and would probably lead to a loss in review efficiency due to duplicate bug detection, it would be necessary if the biasing effect of other reviewers’ comments were strong.
Our experiment results show that the participants in the test group were positively influenced by the existing comment on the code change, so that they captured more bugs of the same type. However, unexpectedly, they were still able to capture bugs of the different type, as the control group did. Like all humans, reviewers are prone to availability bias [21] to various extents. However, our results did not find evidence of a strong negative effect of reviewers’ availability bias. Therefore, our data does not provide any evidence that would justify a change in the current code review practices.
**Existing comments on bugs not normally considered act as (positive) reminders rather than (negative) primers.** Surprisingly, participants in the test group who were primed with the algorithmic bug type (more specifically, a corner case bug) detected the same number of corner case and NullPointerException (NPE) bugs as the participants in the control group. However, participants who were primed with a bug that is normally not considered in review (i.e., NPE) were 12 times more likely to capture this type of bug than the participants of the control group.
This result shows that existing reviewer comments on a code change seem to support recalling (i.e., they act as a reminder) rather than distracting the reviewer. As previously mentioned in Section 5.2, participants in the test group indicated that they focused on the corner cases in the code change and did not pay attention to the possibility that the Integer could be null. Such feedback is in line with the possible existence of anchoring bias [21, 47].
It is likely that the existence of a reviewer comment on an uncommon bug had a de-biasing effect on the participants in the test group (i.e., it mitigated the participants’ bias). In the software engineering literature, there are empirical studies on practitioners’ anchoring bias. For instance, Pitts and Brown [37] provide procedural prompts during requirements elicitation to help analysts avoid anchoring on currently available information. According to the findings of Jain et al. [19], pair programming novices tend to anchor to their initial solutions due to their inability to identify a wider range of solutions. However, to the best of our knowledge, there are no studies on anchoring bias within the context of code reviews. Therefore, further research is required to investigate the underlying cognitive mechanisms that can explain why existing reviewer comments on unexpected bugs act as reminders.
7 CONCLUSIONS
We investigated the robustness of peer code review against reviewers’ proneness to availability bias. For this purpose, we conducted an online experiment with 85 participants. Although the majority of the participants (i.e., ≈70%) were assessed to be prone to availability bias (median = 3.8, max = 4), we did not observe any priming effect of existing review comments on bugs. However, reviewers primed on bugs not normally considered in code review were more likely to find more bugs of this type. Hence, existing comments on this type of bugs acted as reminders rather than primers.
ACKNOWLEDGMENTS
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 642954. A. Bacchelli gratefully acknowledges the support of the Swiss National Science Foundation through the SNF Project No. PP00P2_170529.
REFERENCES
Control Jujutsu: On the Weaknesses of Fine-Grained Control Flow Integrity
Isaac Evans
MIT Lincoln Laboratory
ine@mit.edu
Fan Long
MIT CSAIL
fanl@csail.mit.edu
Ulziibayar Otgonbaatar
MIT CSAIL
ulziibay@csail.mit.edu
Howard Shrobe
MIT CSAIL
hes@csail.mit.edu
Martin Rinard
MIT CSAIL
rinard@csail.mit.edu
Hamed Okhravi
MIT Lincoln Laboratory
hamed.okhravi@ll.mit.edu
Stelios Sidiroglou-Douskos
MIT CSAIL
stelios@csail.mit.edu
ABSTRACT
Control flow integrity (CFI) has been proposed as an approach to defend against control-hijacking memory corruption attacks. CFI works by assigning tags to indirect branch targets statically and checking them at runtime. Coarse-grained enforcements of CFI that use a small number of tags to reduce performance overhead have been shown to be ineffective. As a result, a number of recent efforts have focused on fine-grained enforcement of CFI as it was originally proposed. In this work, we show that even a fine-grained form of CFI with an unlimited number of tags and a shadow stack (to check calls and returns) is ineffective in protecting against malicious attacks. We show that many popular code bases such as Apache and Nginx use coding practices that create flexibility in their intended control flow graph (CFG) even when a strong static analyzer is used to construct the CFG. These flexibilities allow an attacker to gain control of the execution while strictly adhering to fine-grained CFI. We then construct two proof-of-concept exploits that attack an unlimited-tag CFI system with a shadow stack. We also evaluate the difficulty of generating a precise CFG using scalable static analysis for real-world applications. Finally, we perform an analysis on a number of popular applications that highlights the availability of such attacks.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Permissions@acm.org.
1. INTRODUCTION
Memory corruption bugs continue to be a significant problem for unmanaged languages such as C/C++ [7, 17, 53]. The level of control provided by unmanaged languages, such as explicit memory management and low level hardware control, makes them ideal for systems development. Unfortunately, this level of control bears a heavy cost: lack of memory safety [53]. Lack of memory safety, in turn, forms the basis for attacks in the form of code injection [40] and code reuse [17, 49]. Retrofitting memory safety to C/C++ applications can introduce prohibitive overhead (up to 4x slowdown) [37] and/or may require significant programmer involvement in the form of annotations [30, 38].
As a result, the past three decades of computer security research have created a continuous arms race between the development of new attacks [12, 13, 21, 40, 49, 50] and the subsequent development of corresponding defenses [4, 19, 31, 41, 54]. This arms race attempts to strike a balance between the capabilities of the attackers and the overhead, compatibility, and robustness of the defenses [53].
The widespread deployment of defenses such as Data Execution Prevention (DEP) [35, 41], address space layout randomization (ASLR) [54], and stack smashing protection (SSP) [19] has driven the evolution, and sophistication, of attacks. Information leakage attacks [12, 47, 52] enable the construction of multi-step attacks that bypass ASLR and SSP, while code reuse attacks, such as return-oriented programming (ROP) [49], jump-oriented programming (JOP) [13], and return-to-libc [56], can be used to circumvent DEP.
The majority of the attacks rely on some form of control hijacking [53] to redirect program execution. Control Flow Integrity (CFI) is a runtime enforcement technique that provides practical protection against code injection, code reuse, and is not vulnerable to information leakage attacks [4, 61, 62]. CFI provides runtime enforcement of the intended control flow transfers by disabling transfers that are not present in the application’s Control Flow Graph (CFG). CFGs are constructed either by analyzing the source code [55], or, less accurately, by analyzing the disassembled binary [61]. The enforcement is done by assigning tags to indirect branch targets and checking that indirect control transfers point to valid tags.
Precise enforcement of CFI, however, can introduce significant overhead [4, 5]. This has motivated the development of more practical, coarse-grained, variants of CFI that have lower performance overhead but enforce weaker restrictions (i.e., limit the number of tags) [61, 62]. For example, control transfer checks are relaxed to allow transfers to any valid jump targets as opposed to the correct target. Unfortunately, these implementations have been shown to be ineffective as they allow enough valid transfers to enable an attacker to build a malicious payload [22].
As a result of the attacks on coarse-grained variants of CFI, researchers have focused on fine-grained, yet still practical enforcement of CFI. For example, forward-edge CFI [55] enforces a fine-grained CFI on forward-edge control transfers (i.e., indirect calls, but not returns). Cryptographically enforced CFI [34] enforces another form of fine-grained CFI by adding message authentication code (MAC) to control flow elements which prevents the usage of unintended control transfers in the CFG. Opaque CFI (OCFI) [36] enforces fine-grained CFI by transforming branch target checks to bounds checking (possible base and bound of allowed control transfers).
The security of fine-grained CFI techniques is contingent on the ability to construct CFGs that accurately capture the intended control transfers permitted by the application. For C/C++ applications, even with access to source code, this assumption is tenuous at best. In theory, the construction of an accurate CFG requires the use of a precise (sound and complete) pointer analysis. Unfortunately, sound and complete points-to analysis is undecidable [43]. In practice, pointer analysis can be made practical by either adopting unsound techniques or reducing precision (incomplete). Unsound techniques may report fewer connections (tags), which can result in false positives when used in CFI. Given that false positives can interfere with the core program functionality, researchers have focused on building sound but incomplete pointer analysis.
Incomplete analysis leads to conservative over-approximate results. The analysis will conservatively report more connections (i.e., when two pointers may alias). While using incomplete pointer analysis may be sufficient for most program analysis tasks, we show that it is insufficient under adversarial scenarios. The accuracy of the pointer analysis is further exacerbated by the use of common C idioms and software engineering practices that hinder the use of accurate and scalable program analysis techniques.
We present a novel attack, Control Jujutsu 1, that exploits the incompleteness of pointer analysis, when combined with common software engineering practices, to enable an attacker to execute arbitrary malicious code even when fine-grained CFI is enforced. The attack uses a new “gadget” class that we call Argument Corruptible Indirect Call Site (ACICS). ACICS gadgets are pairs of Indirect Call Sites (ICS) and target functions that enable Remote Code Execution (RCE) while respecting a CFG enforced using fine-grained CFI. Specifically, ACICS gadgets 1) enable argument corruption of indirect call sites (data corruption) that in conjunction with the corruption of a forward edge pointer 2) can direct execution to a target function that when executed can exercise remote code execution (e.g., system calls). We show that for modern, well engineered applications, ACICS gadgets are readily available as part of the intended control transfer.
To demonstrate our attack, we construct two proof-of-concept exploits against two popular web servers, Apache HTTPD and Nginx. We assume that the servers are protected using fine-grained CFI (unlimited tags), to enforce only intended control transfers on the forward-edge (i.e., indirect calls/jumps), and a shadow stack to protect the backward-edge (i.e., returns). For the forward edge, the CFG is constructed using the state-of-the-art Data Structure Analysis (DSA) [33] pointer analysis algorithm. For the backward edge, the shadow stack provides a sound and complete dynamic analysis (i.e., there is no imprecision). We show that even under this scenario, which is arguably stronger than any of the available fine-grained CFI implementations, an attacker can perform a control hijacking attack while still operating within the intended CFG.
To evaluate the prevalence, and exploitability, of ACICS gadgets, we evaluate 4 real-world applications. The results show that ACICS gadgets are prevalent and provide a rich target for attackers. Our results indicate that in the absence of data integrity, which is hard to achieve for practical applications, fine-grained CFI is insufficient protection against a motivated attacker.
This paper makes the following contributions:
- **Control Jujutsu**: We present Control Jujutsu, a new attack on fine-grained CFI that exploits the incompleteness of pointer analysis, when combined with common software engineering practices, to enable an attacker to execute arbitrary malicious code.
- **ACICS gadgets**: We introduce a new “gadget” class, ACICS, that enables control hijacking attacks for applications protected using fine-grained CFI.
- **Proof-of-Concept Exploits**: We present two proof-of-concept exploits against Apache HTTPD and Nginx protected using fine-grained CFI with forward and backward-edge protection.
- **Experimental Results**: We present experimental results that characterize the prevalence of ACICS gadgets in real-world applications.
2. EXAMPLE EXPLOIT
We next present an example that illustrates how Control Jujutsu utilizes ACICS gadgets in conjunction with the imprecision of the DSA pointer analysis algorithm to create an RCE attack on Apache 2.4.12, a popular web server.
2.1 Threat Model
The threat model in this paper is a remote attacker trying to hijack control of a machine by exploiting memory vulnerabilities. We assume the system is protected by fine-grained CFI with unlimited tags for the forward edge and a shadow stack implementation for the backward edge. We also assume the deployment of DEP and ASLR. These assumptions are consistent with the literature on code reuse attacks. Finally, we assume the availability of a memory corruption vulnerability that allows an attacker to corrupt certain values on stack or heap. As numerous past vulnerabilities have shown, this assumption is realistic. It is also weaker than an arbitrary attacker read/write assumption made in the related work [31].
2.2 ICS Discovery
Control Jujutsu begins with a search for suitable ICS sites for the ACICS gadget. Control Jujutsu identifies the following requirements for ICS locations:
1. The forward edge pointer and its argument(s) should reside on the heap or a global variable to facilitate attacks from multiple data flows.
---
1Jujutsu is a Japanese martial art in which an opponent’s force is manipulated against himself rather than using one’s own force. In Control Jujutsu, an application’s intended controls are manipulated against it.
Figure 1: APR hook macro in server/request.c:97 defining `ap_run_dirwalk_stat()` in Apache HTTPD and the simplified code snippet of `ap_run_dirwalk_stat()`
```c
AP_IMPLEMENT_HOOK_RUN_FIRST(apr_status_t, dirwalk_stat,
(apr_finfo_t *finfo, request_rec *r,
apr_int32_t wanted), AP_DECLINED)
apr_status_t ap_run_dirwalk_stat(
apr_finfo_t *finfo, request_rec *r,
apr_int32_t wanted)
{
ap_LINK_dirwalk_stat_t *pHook;
int n;
apr_status_t rv = AP_DECLINED;
...
// check the corresponding field of the global _hooks
if (_hooks.link_dirwalk_stat) {
pHook = (ap_LINK_dirwalk_stat_t *)_hooks.link_dirwalk_stat->elts;
// invoke registered functions in the array one by one until a function returns a non-decline value.
for (n=0; n < _hooks.link_dirwalk_stat->nelts; n++){
...
rv = pHook[n].pFunc(finfo, r, wanted); // our selected ICS
if (rv != AP_DECLINED) break;
}
}
...
return rv;
}
```
Figure 2: `dirwalk_stat` called in server/request.c:616 in Apache HTTPD
2. The arguments at the ICS can be altered without crashing the program (before reaching a target function).
3. The ICS should be reachable from external input (e.g., a network request).
Using these requirements, we found many viable ACICS candidates, which we discuss at length in Section 5.1. Here we present a detailed example exploit based on the selected ICS seen in Figure 1. Lines 1-5 use a macro defined in the Apache Portable Runtime (APR) library to define the function `ap_run_dirwalk_stat()`. Lines 7-30 present the simplified code snippet of `ap_run_dirwalk_stat()` after macro expansion. The actual ICS itself occurs at line 23, which invokes the function pointer `pHook[n].pFunc`. Figure 2 presents the specific `ap_run_dirwalk_stat()` call we use in our exploit.
Apache HTTPD uses a design pattern that facilitates modularity and extensibility. It enables Apache module developers to register multiple implementation function hooks to extend core Apache functionality. `ap_run_dirwalk_stat()` is a wrapper function that iteratively calls each registered implementation function until an implementation function returns a value other than `AP_DECLINED`.
2.3 Target Selection
Next, Control Jujutsu searches the application for candidate target sites for the ACICS gadgets. Control Jujutsu identifies target functions that exercise behavior equivalent to a RCE (e.g., `system` or `exec` calls).
In this example, the `piped_log_spawn` function meets and exceeds all of our requirements. Apache allows a configuration file to redirect the Apache logs to a pipe rather than a file; this is commonly used by system administrators to allow transparent scheduled log rotation. This functionality involves Apache reading its configuration file, launching the program listed in the configuration file along with given arguments, and then connecting the program’s standard input to Apache’s log output.
Figure 3 presents a simplified version of the example target function, `piped_log_spawn`. This target function accepts a pointer to the `piped_log` structure as an argument. `piped_log_spawn` invokes an external process found in the `program` field of the `piped_log` structure.
The `piped_log` structure has a similar layout to many other Apache structure types, which significantly expands the number of viable ACICS that can reach it without a crash. This is because many Apache structs also have an entry with type `apr_pool_t` as their first field, so that value will not need to be overwritten. This also eliminates the need to leak valid memory values for the `apr_pool_t` field, which must be valid for our example attack to succeed.
2.4 Exploit Generation
Next, Control Jujutsu constructs the exploit as follows:
1. Use a heap memory corruption vulnerability to corrupt an entry in the `_hooks` structure’s `link_dirwalk_stat` field to point to `piped_log_spawn`.
2. Use the same vulnerability to corrupt the struct in the `request_rec->finfo` field such that, when viewed as a `piped_log` struct, the fields `read_fd` and `write_fd` are null, and the field `program` points to a string with the name and arguments of the program we intend to invoke (e.g., `"/bin/sh -c ..."`).
2.5 CFG Construction
Next, Control Jujutsu examines the CFG to ensure that the ACICS sites we identified using a tool described in Section 4 can be redirected to the target site. In our example, the CFG constructed by the DSA algorithm [33] allows the ICS located at `dirwalk_stat` to point to the target function `piped_log_spawn`. In the next section, we describe why DSA, a context-sensitive and field-sensitive analysis, was not able to construct a CFG that can be used by fine-grained CFI to stop the attack.
3. BUILDING CONTROL FLOW GRAPHS WITH STATIC ANALYSIS
The construction of a precise CFG requires a pointer analysis [6, 23, 24, 26, 33, 42, 45, 51, 59] to determine the set of functions to which the pointer at each indirect call site (e.g., line 23 in Figure 1) can point.
Figure 4 presents a simplified version of ap_hook_dirwalk_stat(), which registers implementation functions that ap_run_dirwalk_stat() (shown in Figure 1) can later invoke for the functionality of dirwalk_stat. The intended behavior of the ICS shown at line 23 in Figure 1 is to only call implementation functions registered via ap_hook_dirwalk_stat() in Figure 4.
The first argument pf of ap_hook_dirwalk_stat() is the function pointer to an implementation function of dirwalk_stat. It has the type ap_HOOK_dirwalk_t, which corresponds to the function signature for dirwalk_stat. ap_hook_dirwalk_stat() stores the function pointer to the APR array _hooks.link_dirwalk_stat. ap_LINK_dirwalk_stat_t (line 3 in Figure 1) represents the type of each array entry.
The function ap_run_dirwalk_stat() (line 7 in Figure 1) iterates over the APR array _hooks.link_dirwalk_stat and runs each implementation function until an implementation function returns a value other than AP_DECLINED.
The example code in Figure 1 and Figure 4 highlights the following challenges for the static analysis:
- **Global Struct:** The analysis has to distinguish between different fields in global variables. _hooks in Figure 1 and Figure 4 is a global struct variable in Apache HTTPD. Each field of _hooks contains an array of function pointers to registered implementation functions for a corresponding functionality. For example, the link_dirwalk_stat field contains function pointers to implementation functions of the functionality dirwalk_stat.
- **Customized Container API:** The analysis has to capture inter-procedural data flows via customized container APIs. The code in Figure 1 and Figure 4 uses customized array APIs apr_array_push() and apr_array_make() to store and manipulate function pointers.
- **Macro Generated Code:** The code shown in Figure 1 and Figure 4 is generated from macro templates found in Apache Portable Runtime library. For example, for a functionality malicious, there are pairs of functions ap_hook_malicious() and ap_run_malicious() that are structurally similar to the code shown in Figure 1 and Figure 4. This imposes a significant additional precision requirement on the static analysis, as it needs to consider a (potentially) large number of similar functions that can manipulate the data structures inside _hooks.
3.1 Static Analysis: Knobs and Trade-offs
Precise (sound and complete) pointer analysis is undecidable [43]. Unsound pointer analysis may generate a CFG that misses legitimate indirect control transfers, which may ultimately lead CFI to report false positives that break program functionality; this is typically undesirable (see Section 3.3).
Researchers instead focus on sound but incomplete pointer analysis algorithms [6, 23, 24, 26, 33, 42, 45, 51, 59] that conservatively report more connections than can occur at runtime, for example that two pointers may alias or that an indirect call site may call a function. The hope is that such imprecision can be controlled and that the analysis can be accurate enough that the generated CFG still does not contain malicious connections.
Another important design decision for pointer analysis algorithms is scalability [25]. Standard pointer analysis algorithms for C programs have three important knobs that control the trade-offs between accuracy and scalability: context-sensitivity, field sensitivity, and flow sensitivity.
**Context Sensitivity:** A context-sensitive analysis [33, 51, 59] is able to distinguish between different invocations of a function at different call sites. It tracks local variables, arguments, and return values of different function invocations, at different call sites separately. A context-insensitive analysis, in contrast, does not distinguish between different invocations of a function, i.e., analysis results for local variables, arguments, and the return values from different invocations of the function are merged together.
Previous work in the programming language community has shown that context sensitivity is indispensable for obtaining precise pointer analysis results in real world applications [25, 33, 51, 59], because it eliminates a large set of unrealizable information propagation paths where calls and returns do not match. Context sensitivity is especially important for analyzing C programs that implement customized memory management functions or manipulate generic data structures with common interfaces, because otherwise all pointer values returned by each memory management or data structure function will be aliased (to each other).
For our example in Figure 4, context sensitivity is also important. A context-insensitive analysis will merge the analysis results of the return value of different invocations of apr_array_push(). Therefore a context-insensitive analysis will incorrectly determine that pHook at line 9 in Figure 4 may alias to pHook in another implementation registration function such as ap_hook_malicious() (recall that all implementation registration functions are generated with macro templates), because both are equal to a return value of apr_array_push() (albeit from different invocations). Eventually, this imprecision will cause the analysis to determine that the indirect call at line 23 in Figure 1 may call an implementation function registered via ap_hook_malicious(), because the analysis conservatively determines that the function pointer argument value in ap_hook_malicious() may flow to pHook->pFunc via the aliased pHook pointer.
Unfortunately, context-sensitive pointer analysis is expensive for large real-world applications. Full context-sensitive analysis is also undecidable for programs that contain recursion [44]. Standard clone-based context-sensitive pointer analysis [59] duplicates each function in a program multiple times to distinguish different invocations of the function, which unfortunately increases the size of the analyzed program exponentially. The DSA algorithm uses bottom-up and top-down algorithms to traverse the call graph of a program and summarizes context-sensitive analysis results into a unification-based data structure graph [33]. It produces slightly less accurate results than clone-based algorithms but avoids an exponential blow-up on real-world programs.
Field Sensitivity: A field-sensitive analysis [33, 42] is able to distinguish different fields of a struct in C programs, while a field-insensitive analysis treats the whole struct as a single abstract variable. Modifications to different fields are transformed into weak updates to the same abstract variable, where the analysis conservatively assumes that each of the modifications may change the value of the abstract variable.
For our example in Figure 1 and Figure 4, field sensitivity is important. A field-insensitive analysis treats the global struct _hooks as a single abstract variable, so that it cannot distinguish the field link_dirwalk_stat from other fields in _hooks such as link_malicious. Therefore the analysis conservatively determines that the assignment at lines 16-17 in Figure 1 may retrieve an array that contains function pointers for other functionalities like malicious. This causes the analysis to eventually determine that the indirect call at line 23 in Figure 1 may call any implementation function registered via ap_hook_malicious().
Field-sensitive pointer analysis is hard for C programs due to the lack of type-safety. Pointer casts are ubiquitous, and unavoidable for low-level operations such as memcpy(). Field-sensitive analysis algorithms [33, 42] typically have a set of hand-coded rules to handle common code patterns of pointer casts. When such rules fail for a cast of a struct pointer, the analysis has to conservatively merge all fields associated with the struct pointer into a single abstract variable and downgrade into a field-insensitive analysis for the particular struct pointer.
Flow Sensitivity: A flow-sensitive analysis considers the execution order of the statements in a function [23, 26, 45], while a flow-insensitive analysis conservatively assumes that the statements inside a function may execute in arbitrary order. Flow sensitivity typically improves pointer-analysis accuracy, but when combined with context-sensitive analysis it can lead to scalability issues. Despite our best efforts, we were unable to find any publicly available context-sensitive, flow-sensitive pointer analysis that can scale to server applications such as Apache HTTPD. A common practice to improve the accuracy of a flow-insensitive analysis is to apply a static single assignment (SSA) transformation to the code before the analysis [24].
3.2 DSA Algorithm
As discussed above, the combination of context sensitivity and field sensitivity is critical for generating a precise CFG that can stop the attack described in Section 2. We next present the results of using the DSA algorithm [33] to generate a CFG for Apache HTTPD. We chose the DSA algorithm because, to the best of our knowledge, it is the only analysis that 1) is context-sensitive and field-sensitive, 2) can scale to server applications like Apache HTTPD and Nginx, and 3) is publicly available.
The DSA algorithm is available as a submodule of the LLVM project [3] and is well maintained by the LLVM developers. It works with programs in LLVM intermediate representation (IR) generated by the LLVM Clang compiler [2]. We use Clang to compile Apache HTTPD together with the Apache Portable Runtime (APR) library [1] into a single bytecode file that contains LLVM IRs for the whole Apache HTTPD and APR library. We run the LLVM mem2reg pass (SSA transformation pass) on the bytecode file to improve the accuracy of the pointer analysis. We then construct an LLVM pass that runs the DSA algorithm and queries the DSA result to generate a CFG for the bytecode file.
Unfortunately, the DSA algorithm produces a CFG that cannot stop the attack in Section 2. Specifically, the CFG specifies that the indirect call at line 26 in Figure 4 may call the function piped_log_spawn(). We inspected the debug log and the intermediate pointer analysis results of the DSA algorithm. We found that although as a context-sensitive and field-sensitive analysis the DSA algorithm should theoretically be able to produce a precise CFG to stop the attack, the algorithm in practice loses context sensitivity and field sensitivity because of convoluted C idioms and design patterns in Apache HTTPD and the APR library. As a result, it produces an imprecise CFG. Fine-grained CFI systems that disallow the calling of functions whose address is not taken can prevent the proposed attack through piped_log_spawn(). The attack can succeed, however, by targeting piped_log_spawn() indirectly through functions such as apr_open_piped_log_ex(), whose address is directly taken by the application. Next, we describe some of the sources of imprecision in more detail.
Struct Pointer Casts: We found that struct-pointer-cast operations in Apache HTTPD cause the DSA algorithm to lose field sensitivity on pointer operations. Pointer casts are heavily used at the interface boundaries of Apache components. There are in total 1027 struct pointer conversion instructions in the generated bytecode file of Apache HTTPD.
For example, pointers are cast from void* to apr_LINK_dirwalk_stat_t* at line 8 in Figure 4 when using the array container API apr_array_push(). Apache HTTPD also uses its own set of pool memory management APIs and similar pointer casts happen when a heap object crosses the memory management APIs. When the DSA algorithm detects that a memory object is not accessed in a way that matches the assumed field layout of the object, the algorithm conservatively merges all fields into a single abstract variable and loses field sensitivity on the object.
Integer to Pointer Conversion: Our analysis indicates that the Clang compiler generates an integer to pointer conversion instruction (inttoptr) in the bytecode file for the APR library function apr_atomic_casptr(), which implements an atomic pointer compare-and-swap operation.
For such inttoptr instructions, the DSA algorithm has to conservatively assume that the resulting pointer may alias to any pointers and heap objects that are accessible at the enclosing context. Although such instructions are rare (apr_atomic_casptr() is called three times in the Apache HTTPD source code), they act as sink hubs that spread imprecision due to this over-conservative aliasing assumption.
Cascading Imprecision: The struct pointer casts and integer to pointer conversions are the root sources of the imprecision. One consequence of the imprecision is that the DSA algorithm may generate artificial forward edges (calls) for indirect call sites. Although initially such artificial forward edges may not directly correspond to attack gadgets in Apache HTTPD, they introduce artificial recursions to the call graph. Because maintaining context sensitivity for recursions is undecidable, the DSA algorithm has to conservatively give up context sensitivity for the function calls between functions inside a recursive cycle (even if they are artificially
recursive due to the analysis imprecision). This loss of context sensitivity further introduces imprecision in field sensitivity because of type mismatch via unrealizable information propagation paths.
In our Apache HTTPD example, this cascading effect continues until the DSA algorithm reaches an (imprecise) fix-point on the analysis results. As a result, 51.3% of the abstract struct objects the DSA algorithm tracks are merged into single abstract variables (i.e., the loss of field sensitivity); we observed an artificial recursion cycle that contains 110 functions (i.e., due to the loss of context sensitivity). Some of this imprecision may be attributed to changes in LLVM IR metadata since version 1.9. Previous versions relied on type annotations that used to persist from the llvm-gcc front-end into the LLVM IR metadata that are no longer available. LLVM DSA prior to version 1.9 used a set of type-based heuristics to improve the accuracy of the analysis. Aggressive use of type-based heuristics is unsound and could introduce false negatives (opening up another possible set of attacks).
3.3 Unsound Analysis with Annotations
To maintain soundness guarantees, existing pointer analysis algorithms conservatively over-approximate results. For example, sound pointer analysis algorithms conservatively assume that two pointers may alias or an indirect call site may call a function when analyzing hard-to-analyze C idioms or code design patterns.
One way to improve the security of fine-grained CFI is to generate CFGs using pointer analysis algorithms that relax soundness guarantees. Unsound pointer analysis can avoid such overconservative assumptions and generate restrictive CFGs that may stop attacks based on ACICS gadgets. One consequence of applying unsound analysis, however, is that a restrictive CFG may cause undesirable false positives that interfere with legitimate program operation.
Our experiments show that developers adopt design patterns that improve modularity and maintainability at the cost of adding program analysis complexity. One way to improve pointer-analysis precision is to rely on programmers to provide annotations that help the underlying analysis navigate hard-to-analyze code segments. One promising research direction is the design of an annotation system that improves the underlying pointer analysis with minimal developer involvement.
4. ACICS DISCOVERY TOOL
We next discuss how to automate the discovery of ACICS gadgets using the ACICS Discovery Tool (ADT). To help discover candidate ICS/target function pairs (ACICS gadgets), ADT dynamically instruments applications using the GDB 7.0+ reverse debugging framework. For each candidate ACICS gadget, ADT runs a backward data-flow analysis that discovers the location of the ICS function pointer (and its arguments) in memory. Once a candidate pair is identified, ADT automatically corrupts the forward edge pointer and its arguments to verify that remote code execution can be achieved. Below, we describe ADT's approach in detail.
4.1 Approach
As input, ADT takes a target program, a list of candidate indirect call sites (ICS), sample inputs that exercise the desired program functionality (and the list of ICS), and the address of a candidate target function inside the target program. For each ICS location, ADT performs the following steps (illustrated in Figure 5):
1. Reach ICS: ADT instruments program execution, using the GDB framework, with the ability to perform reverse execution analysis once program execution reaches a candidate ICS location. Specifically, ADT adds a breakpoint which enables the process recording functionality at the entry to the function enclosing the ICS location.
2. Backward Dataflow Analysis: Once execution reaches the ICS location, ADT performs a backward reaching-definition dataflow analysis (see Section 4.2) from the registers containing the target function address and its arguments to the memory locations that hold their values.
3. Determine Last Write IP: Next, ADT needs to identify program locations that can be used to corrupt the ICS function pointer and its values. To do this, ADT restarts the debugger and instruments the memory addresses, identified in the previous step, to record the code locations (i.e., the instruction pointer) that perform memory writes to these locations. To differentiate memory writes that occur in loops, ADT maintains a write counter. Using this information, ADT can determine the ideal program location to corrupt the ICS target and its arguments such as to minimize possible interference.
4. Corrupt Function Pointers and Arguments: At this point, ADT is able to restart the debugger and halt the program at the ideal point identified in the previous step. Then ADT redirects the ICS function pointer and its arguments to the target function. Additionally, by tracking every statement executed until the target ICS is reached, a lower bound of the liveness of the ACICS can be reported.
The liveness of an ACICS allows us to reason about its exploitability; if the liveness persists across the program lifecycle, the ICS can be attacked by almost any memory read/write vulnerability, regardless of where it occurs temporally. On the other hand, an ACICS whose liveness is contained in a single function is significantly less exploitable.
Input: The target ICS instruction icsinst.
Input: Prev, a function that returns the previous instruction (or NULL if not available) before a given instruction.
Output: The memory address that stores the call target, or NULL on failure.
1.  if icsinst is of the form "call REG[i]" then
2.      r ← i
3.  else
4.      return NULL
5.  inst ← Prev(icsinst)
6.  while inst ≠ NULL do
7.      if inst modifies REG[r] then
8.          if inst is of the form "REG[r] = a * REG[i] + c" then
9.              r ← i
10.         if inst is of the form "REG[r] = *(a * REG[i] + c)" then
11.             return a * REG[i] + c
12.     inst ← Prev(inst)
13. return NULL
Figure 6: Backward dataflow analysis to identify the target address
4.2 Backward Dataflow
Figure 6 presents ADT’s backward dataflow analysis algorithm. The goal of this analysis is to perform a backward reaching definition analysis from the register values that hold the target function and its arguments to corruptible memory locations. For example, in Figure 5, the dataflow algorithm called on input \( x \) would produce the address of \( r->\text{handler} \). This is done by iteratively stepping back in time (reverse debugging) and examining any instruction that modifies the register which originally contained the function pointer. We assume that the instructions involved in the dataflow of the target function can be represented as the composition of linear functions and dereferences, and report a dataflow error if this does not hold. Once a function which dereferences a memory location is discovered, linear function models are used to compute the source address of the forward edge.
ADT contains several additional checks, such as an assertion to ensure that the forward edge pointer value at the ICS matches the value observed at the computed source memory address which is the output of the backward dataflow procedure. The typical use case discovered by the backward analysis is the lookup of a member element from a struct pointer; such as \( x->y \); additional levels of indirection such as \( x->y->z \) are currently not supported.
4.3 Discussion
ADT was not designed to discover all possible ACICS gadgets but rather as a tool to facilitate the construction of proof-of-concept exploits. Specifically, ADT under-reports the number of ACICS gadgets for the following reasons. First, the backward dataflow analysis does not support multi-level argument redirection. Second, ADT assumes deterministic execution; non-deterministic behavior will result in under-reported ACICS gadgets.
5. EVALUATION
We evaluate Control Jujutsu using two proof-of-concept exploits against two popular web servers Apache and Nginx. We assume that the servers are protected using fine-grained CFI (unlimited tags), to enforce only intended control transfers on the forward-edge (i.e., indirect calls/jumps), and a shadow stack to protect the backward-edge (i.e., returns). For the forward edge, the CFG is constructed using the state-of-the-art DSA [33] pointer analysis algorithm. To protect the backward edge, we assume a shadow stack implementation.
For each exploit, we evaluate the availability of ACICS gadgets by measuring 1) the number of suitable indirect call sites and 2) the number of target functions that can be used together to launch remote code execution attacks.
5.1 Apache HTTPD 2.4.12
5.1.1 Suitable ICS:
Our evaluation of the unoptimized Apache binary shows that the server contains 172 indirect call sites (ICS). We limit our evaluation to the core binary and omit reporting potential ICS targets in other Apache modules, such as the Apache Portable Runtime (APR) and APR-util libraries. From these 172 sites, we want to find a subset of sites that 1) are exercised when the program processes a request and 2) whose forward edge pointer and arguments can be successfully corrupted by our ADT tool without crashing the program.
Table 1 presents the classification results of ICS exercised during different execution stages of Apache. In order to detect whether an ICS is exercised during the HTTP GET request life cycle or during startup, we vary when the test script is called in our tool. Our results
show that there are 20 sites exercised during an HTTP GET request life cycle and 45 sites exercised during startup. Note that some of sites exercised during startup are also exercised by an HTTP GET request.
We use our ADT tool to detect the location of the forward edge pointer and arguments of each of the 51 exercised ICS and to corrupt these values. Table 2 presents our experimental results. Of the 51 ICS that are exercised dynamically in our experiments, our tool successfully corrupted forward edge pointers for 34 ICS. For 3 ICS, our tool successfully corrupted both the forward edge pointers and the arguments.
Code patterns inside Apache facilitate our attack. We discovered that 108 of the 172 ICS in the Apache binary are generated from the APR library's "hook" system, which allows a function to register a function pointer for a callback at a later time. For all of the ICS generated by the APR hooks, the forward edge pointers are stored inside the global struct _hooks (see Section 2, Figures 1 and 4). This hook structure persists across the lifetime of the Apache worker process, which is ideal for our attack. Additionally, almost all of the hook functions have arguments that are pointers to objects visible across the entire request lifecycle, such as the ubiquitous request_rec *r argument. This is also ideal for corruption purposes.
In our Apache exploit example in Section 2, we use the ICS inside ap_run_dirwalk_stat(); the function meets all of our requirements and is exercised during every HTTP GET request. While our evaluation focuses on unoptimized binaries to facilitate the construction of our proof-of-concept attacks, we also verified that the target ACICS gadget is still present at the LLVM -O2 optimization level. We believe that optimizations such as inlining will not significantly reduce the number of available gadgets.
5.1.2 Target Functions:
We run a script that searches the Apache source code for system calls that we can use to trigger behaviors equivalent to RCE such as exec() and system(). For each function in Apache, the script measures the distance between the function and a function that contains such system calls.
Table 3 presents the results. The farther away a target function is in the call graph, the harder it generally is to use it in the payload. At the same time, more viable functions become available. Related work has found similar results for the Windows platform [22]. Our example Apache exploit in Section 2 uses piped_log_spawn(), which is two calls away from the system call.
5.2 Nginx 1.7.11
Our analysis for Nginx mirrors the analysis we performed for Apache source code. We used the ACICS Discovery Tool (ADT) described in Section 4 and performed manual analysis to find the most suitable indirect call site and target function to demonstrate our attack.
5.2.1 Suitable ICS:
Our analysis on the unoptimized Nginx binary shows that there are 314 ICS in Nginx. We ran our ADT tool on each of the 314 ICS in a way similar to our Apache experiments. Table 4 presents the classification results of ICS based on different execution stages and Table 5 presents the corruption experiment results.
Our results show that there are 36 ICS exercised during our Nginx experiments and 27 of these ICS are exercised during an HTTP GET request lifecycle after Nginx startup. Of the 36 exercised ICS, our ADT tool successfully corrupted the forward edge pointers and arguments for 4 ICS.
Table 4: Indirect Call Sites Dynamic Analysis
<table>
<tbody>
<tr>
<td>Number of ICS dynamically encountered</td>
<td>36</td>
</tr>
<tr>
<td>Detected forward edge pointer on the heap/global</td>
<td>7</td>
</tr>
<tr>
<td>Automatically corrupted forward edges</td>
<td>7</td>
</tr>
<tr>
<td>Automatically corrupted forward edges + arguments</td>
<td>4</td>
</tr>
</tbody>
</table>
Table 5: Automatic Corruption Analysis
We found that the ICS at core/ngx_output_chain.c:74 in ngx_output_chain() is an ideal candidate ICS for our attack. Figure 7 presents a simplified code snippet of ngx_output_chain(). The ICS is at line 27 in Figure 7. The function implements the filter chaining mechanism that is inherent to Nginx’s modular design, as it gives an easy way to manipulate the output of the various handlers run on the request object to generate a response.
In this function, the function pointer ctx->output_filter and the argument ctx->filter_ctx are both derived from ctx, which is a ngx_output_chain_ctx_t struct pointer. First, this ctx is a globally visible object that lives on the heap, so our tool successfully corrupts all of these values.
Second, the argument ctx->filter_ctx is a void pointer that is written only once during the request life cycle, whereas the argument in is a pointer to the head of a linked list of filters that are applied to request responses. This linked list is modified in every module that implements a filter. However, with manual dataflow analysis, it is possible to modify this linked list so that the checks at lines 18, 19, and 20 of Figure 7 pass and we reach the execution of the ICS before any crash happens. Third, as all response body filters are called before the response is returned to the user, we were able to remotely exercise this ICS during the request life cycle.
```c
ngx_int_t
ngx_output_chain(ngx_output_chain_ctx_t *ctx, ngx_chain_t *in)
{
    ...
    if (ctx->in == NULL && ctx->busy == NULL) {

        /* the short path for the case when the ctx->in
         * and ctx->busy chains are empty, the incoming
         * chain is empty too or has the single buf
         * that does not require the copy */

        if (in == NULL) {
            return ctx->output_filter(ctx->filter_ctx, in);
        }

        if (in->next == NULL
            && !(in->buf->in_file && in->buf->file_last > NGX_SENDFILE_LIMIT)
            && ngx_output_chain_as_is(ctx, in->buf))
        {
            return ctx->output_filter(ctx->filter_ctx, in);
        }
    }
    ...
}
```
5.2.2 Target Function:
We use a script to search Nginx source code for system calls with RCE capability. Table 6 shows the number of potential targets based on the distance in the call graph. We found that the function ngx_execute_proc() (shown in Figure 8) is an ideal target function for our proof-of-concept attack, because it executes an execve() call with passed-in arguments and it has a small arity of 2, which facilitates the type punning.
```c
static void ngx_execute_proc(ngx_cycle_t *cycle, void *data)
{
    ngx_exec_ctx_t *ctx = data;

    if (execve(ctx->path, ctx->argv, ctx->envp) == -1) {
        ngx_log_error(...);
    }

    exit(1);
}
```
5.2.3 Proof-of-concept Attack:
We thus identified the ACICS gadget pair for our attack, which is composed of the ICS at core/ngx_output_chain.c:74 in ngx_output_chain() (see line 27 in Figure 7) and the target function ngx_execute_proc() (see Figure 8).
We then perform the attack as follows. We corrupt ctx->output_filter to point to the target function ngx_execute_proc() and we corrupt the memory region that in points to so that when the memory region is viewed as a ngx_exec_ctx_t struct in ngx_execute_proc(), it will trigger RCE at line 6 in Figure 8. We successfully achieved RCE with our attack.
Table 6: Target Functions Count Based on CallGraph distance
<table>
<thead>
<tr>
<th>Distance</th>
<th>1 call away</th>
<th>2 calls away</th>
<th>3 calls away</th>
</tr>
</thead>
<tbody>
<tr>
<td>Number of functions</td>
<td>1</td>
<td>2</td>
<td>3</td>
</tr>
</tbody>
</table>
Figure 7: ACICS for Nginx found in ngx_output_chain function
Figure 8: Nginx Target Function that calls execve
6. DISCUSSION
In summary, our results demonstrate that the availability of ACICS gadgets inside Apache and Nginx can be harnessed to produce two proof-of-concept attacks. Our results also show that, on all evaluated applications, the DSA algorithm loses so much field sensitivity and context sensitivity that the generated CFGs are not precise enough to stop the proof-of-concept attacks. Together, the results indicate the difficulty of creating a sound, precise, and scalable CFG construction algorithm that can be used by fine-grained CFI to stop ACICS gadgets.
6.1 Complete Memory Safety
Complete memory safety techniques that enforce both temporal and spatial safety properties can defend against all control hijacking attacks, including Control Jujutsu. SoftBound with its CETS extension [37] enforces complete memory safety, albeit at a significant cost (up to 4x slowdown).
On the other hand, experience has shown that low overhead techniques that trade security guarantees for performance (e.g., approximate [48] or partial [5] memory safety) are eventually bypassed [16, 22, 47]. CPI [31] is a recent technique that achieves low performance overhead by providing memory safety properties for code pointers only (i.e., not data pointers). Unfortunately, it has already been shown to be bypassable [21].
Hardware support can make complete memory safety practical. Intel memory protection extensions (MPX) [29] can provide fast enforcement of memory safety checks. The Low-Fat fat pointers scheme shows that hardware-based approaches can enforce spatial memory safety at very low overhead [32]. Tagged architectures and capability-based systems such as CHERI [58] can also provide a promising direction for mitigating such attacks.
6.2 Runtime Arity Checking
The recently published Indirect Function-Call Checks (IFCC) [55] is a forward-edge enforcement variant of CFI designed for C++ programs. In addition to forward-edge enforcement, it further imposes a restriction that the arity of call sites and target functions must match. IFCC is capable of more powerful restrictions, but its authors limit themselves to checking arity for reasons discussed in Section 6.3.1.
IFCC may limit the number of available ACICS, but it cannot prevent the Control Jujutsu attack in general. In particular, using our ACICS discovery tool, we were able to easily expand on our original exploit for Apache and develop an additional full exploit based on an ACICS whose arity matches that of its target function. This exploit would not be detected by IFCC and is detailed in Section 6.3.1. As for Nginx, the ACICS gadget used in our proof-of-concept exploit already has matching arity between the ICS and the target function, so IFCC would not detect it either.
6.3 Runtime Type Checking (RTC)
One way to restrict ACICS gadgets is to use a runtime type checker for C. The most precise runtime type checker would need access to the program source for type name information that is typically removed by C compilers. Some information (e.g., the width in words of arguments) is inferrable purely from binary analysis with the use of an interpreter and runtime environment, as in the Hobbes checker [14], but the guarantees of such runtime type checking are substantially weakened.
6.3.1 Challenges of RTC
Unfortunately, runtime checks based on source code inference would break compatibility with a large subset of real-world code. Qualifiers such as const are routinely violated at runtime; a recent paper [18] found that for const pointers alone, each of thirteen large FreeBSD programs and libraries examined contained multiple “deconst” pointer idioms which would be broken if const had been enforced at runtime. In general, real-world programs do not always respect function pointer types at runtime, as the IFCC paper noted when they explained that their approach could support one tag per type signature, but that this “can fail for function-pointer casts.”
The callback and object-oriented programming patterns that exist in large C programs are analogous to the virtual table semantics of C++ programs. As our attack examples clearly demonstrate, these indirect call sites in C programs with higher-order constructs require protections in the same way that C++ programs need principled virtual table protection.
A telling example of these patterns is the APR library’s bucket brigade system. The bucket brigade structure, shown in Figure 9, is analogous to a C++ object. It contains members like “data” along with generic member functions that know how to read, write, and delete the data. Additionally, buckets live on the heap, so they are globally visible and thus can be corrupted in any function with a heap vulnerability.
```c
struct apr_bucket_type_t {
const char *name;
int num_func;
void (*destroy)(void *data);
...
};
struct apr_bucket {
const apr_bucket_type_t *type;
apr_size_t length;
apr_off_t start;
void *data;
void (*free)(void *e);
...
};
```
Figure 9: bucket_brigade declarations in APR-util
The structure is exercised by macros in the APR-util library such as apr_bucket_destroy, seen in Figure 10. This macro is an ideal example of an ACICS—particularly dangerous because the function it executes and its argument are stored in a closure-like style inside the same structure. If an attacker can corrupt a bucket brigade struct which is later destroyed, an arbitrary function can be called with an arbitrary argument.
There are dozens of calls to apr_bucket_destroy and its wrapper macro apr_bucket_delete in the Apache source. We verified that the DSA analysis determines that apr_bucket_delete might call piped_log_spawn. Unlike the example in Figure 2, the arities of the ICS and the target match, which passes the arity check imposed by IFCC.
We took a particular instance and verified that the data in e was live throughout much of the request lifecycle, and that e->data and e->type->destroy could be corrupted immediately after initialization (as long as e->length was also corrupted to 0) without causing a crash before a call to apr_bucket_delete was made. In particular, the function which makes the call to this ACICS is ap_get_brigade.
Patterns like this occur even more frequently in BIND, where many structs are effectively objects with a “methods” field; an ex-
```c
#define apr_bucket_destroy(e)           \
    do {                                \
        (e)->type->destroy((e)->data);  \
        (e)->free(e);                   \
    } while (0)
```
Figure 10: apr_bucket_destroy macro definition in APR-util
The principled solution for eliminating this problem would be explicit programmer annotations for any aliasing function signature. However, the effort required to annotate all programs at this level of detail would be immense.
7. RELATED WORK
Control Flow Bending (CFB) [15] also demonstrates, independently and concurrently with our work, attacks against fine-grained CFI. To perform their proof-of-concept attacks, Control Flow Bending introduces the notion of printf-oriented programming, a form of ACICS gadgets, that can be used to perform Turing-complete computation. CFB assumes a fully-precise CFG, whose construction we show is undecidable. CFB relies on manual analysis for attack construction and is only able to achieve remote code execution in one of their six benchmarks. Moreover, printf-oriented programming is only applicable to older versions of glibc. In newer versions, the `%n` protection prevents the printf-oriented programming attack [11]. In contrast, Control Jujutsu introduces a framework (policies and tools) that enables automatic attack construction. Together, CFB and Control Jujutsu demonstrate that attacks against fine-grained CFI are possible in theory and in practice.
The Out of Control work by Goktas et al. [22] shows that coarse-grained implementations of CFI (with only 2 or 3 tags) can be bypassed. In contrast, we show that even a fine-grained implementation of CFI with an unlimited number of tags and a shadow stack, using state-of-the-art context- and field-sensitive static analysis, is bypassable by a motivated attacker. Moreover, by studying the inherent limitations of scalable static analysis techniques, we show that attacks such as Control Jujutsu are hard to prevent using CFI.
Counterfeit Object-Oriented Programming (COOP) [46] is another recent attack on modern CFI defenses. COOP focuses exclusively on C++, showing that protecting v-table pointers in large C++ programs is insufficient. Their work, like ours, focuses on design patterns that are common in sufficiently large or complex applications and are not accounted for in the design of CFI defenses. There may be some extensions of the COOP approach to C programs (particularly ones making heavy use of the patterns we described earlier); we leave this exploration to future work.
On the defense side, a number of recent fine-grained CFI techniques have been proposed in the literature. Forward-edge CFI [55] enforces a fine-grained CFI on forward-edge control transfers (i.e. indirect calls, but not returns). Cryptographically enforced CFI [34] enforces another form of fine-grained CFI by adding message authentication code (MAC) to control flow elements which prevents the usage of unintended control transfers in the CFG. Opaque CFI (OCFI) [36] enforces a fine-grained CFI by transforming the problem of branch target check to bounds checking (possible base and bound of allowed control transfers). Moreover, it prevents attacks on unintended CFG edges by applying code randomization. The authors of OCFI mention that it achieves resilience against information leakage (a.k.a. memory disclosure) attacks [47, 52] because the attacker can only learn about intended edges in such attacks, and not the unintended ones which were used in previous attacks against coarse-grained CFI [22]. Our attack shows that just the intended edges are enough for a successful attack.
Coarse-grained CFI efforts include the original CFI implementation [4], CCFIR [61], and Bin-CFI [62], all of which are bypassed by the Out of Control attack.
Software Fault Isolation (SFI) and SFI-like techniques also implement CFI at various granularities. Native Client [8, 60], XFI [20], and WIT [5] are some of those examples.
Other randomization-based [10, 27, 28, 57] and enforcement-based defenses [9, 58] against memory corruption attacks have been proposed and studied in the literature. Due to space limitations, we do not discuss them in detail here. Interested readers can refer to the surveys in the literature for a list of these defenses [39, 53].
8. CONCLUSION
We present a new attack, Control Jujutsu, that exploits the imprecision of scalable pointer analysis to bypass fine-grained enforcement of CFI (forward and backward edge). The attack uses a new “gadget” class, Argument Corruptible Indirect Call Site (ACICS), that hijacks control flow to achieve remote code execution while still respecting control flow graphs generated using context- and field-sensitive pointer analysis.
We show that preventing Control Jujutsu by using more precise pointer analysis algorithms is difficult for real-world applications. In detail, we show that code design patterns for standard software engineering practices such as extensibility, maintainability, and modularity make precise CFG construction difficult.
Our results provide additional evidence that techniques that trade off memory safety (security) for performance are vulnerable to motivated attackers. This highlights the need for fundamental memory protection techniques such as complete memory safety and indicates that the true cost of memory protection is higher than what is typically perceived.
9. ACKNOWLEDGEMENTS
We thank the anonymous reviewers for their helpful feedback and our shepherd Hovav Shacham for his help with the camera ready version of the paper. We also thank Deokhwan Kim, Vladimir Kiriansky, and William Streilein for their support, feedback and suggestions for improving this paper. This research was supported by DARPA (Grant FA8650-11-C-7192) and the Assistant Secretary of Defense for Research & Engineering under Air Force Contract #FA8721-05-C-0002.
10. REFERENCES
[29] Intel. Introduction to Intel Memory Protection Extensions, 2013.
[34] Mashtizadeh, A. J., Bittau, A., Mazières, D., and Boneh, D. Cryptographically enforced control flow integrity.
[40] One, A. Smashing the stack for fun and profit. Phrack Magazine 7, 49 (1996), 14–16.
[51] Sridharan, M., and Bodik, R. Refinement-based context-sensitive points-to analysis for Java. In Proc. of PLDI.
Development and validation of a Descriptive Cognitive Model for predicting usability issues in a Low Code Development Platform
Carlos Silva¹, Joana Vieira¹, José C. Campos²,³, Rui Couto²,³, António N. Ribeiro²,³
¹Center for Computer Graphics, Guimarães, Portugal
²Department of Informatics, University of Minho, Braga, Portugal
³HASLab/INESC TEC, Braga, Portugal
Corresponding author: Rui Couto, rmscouto@gmail.com
Word count: 9468 (text) + 1010 (references)
Manuscript type: Special Section Article
PRÉCIS
This study proposes and evaluates a Descriptive Cognitive Model (DCM) for the identification of initial usability issues in a low-code development platform (LCDP). By applying the proposed DCM we were able to predict the interaction problems felt by first-time users of the LCDP.
ABSTRACT
**Objective:** Development and evaluation of a Descriptive Cognitive Model (DCM) for the identification of three types of usability issues in a low-code development platform (LCDP).
**Background:** LCDPs raise the level of abstraction of software development by freeing end-users from implementation details. An effective LCDP requires an understanding of how its users conceptualize programming. It is necessary to identify the gap between the LCDP end-users’ conceptualization of programming, and the actions required by the platform. It is also relevant to evaluate how the conceptualization of the programming tasks varies according to the end-users’ skills.
**Method:** DCMs are widely used in the description and analysis of the interaction between users and systems. We propose a DCM which we called PRECOG that combines task-decomposition methods with knowledge-based descriptions and criticality analysis. This DCM was validated using empirical techniques to provide the best insight regarding the users’ interaction performance. Twenty programmers (10 experts, 10 novices) were observed using a LCDP and their interactions were analyzed according to our DCM.
**Results:** The DCM correctly identified several problems felt by first-time platform users. The patterns of issues observed were qualitatively different between groups. Experts mainly faced interaction related problems, while novices faced problems attributable to a lack of programming skills.
**Conclusion:** Applying the proposed DCM we were able to predict three types of interaction problems felt by first time users of the LCDP.
**Application:** The method is applicable when it is relevant to identify possible interaction problems, resulting from the users’ background knowledge being insufficient to guarantee a successful completion of the task at hand.
**Keywords:** End-User Development, Low-Code Development Platforms, Descriptive Cognitive Models, Usability, Human-Computer Interaction
**INTRODUCTION**
Low-code development platforms (LCDP) address the need for increased productivity in software development. By raising the abstraction level at which software is developed, they automate low-level and routine development tasks, effectively contributing to solving the problem of a global shortage of professional software developers. Forrester’s Low-Code Market Forecast predicts low-code platforms will reach over 15 billion US dollars in 2020 (Marvin, 2018). At the same time, they lower the entry barrier to software development. As these low-level tasks become automated, developers are not required to carry them out (or even know how to carry them out). Low-level technical details are effectively hidden by the platform. If the entry level becomes low enough, we can say these platforms become End-User Development (EUD) platforms (Fischer, Giaccardi, Ye, Sutcliffe, & Mehandjiev, 2004). At that point, no special programming skills are needed to use them. Other terms have been used to describe related concepts with varying levels of scope, such as End-User Programming (EUP), End-User Software Engineering (EUSE) and Meta-Design (see Barricelli, Cassano, Fogli, & Piccinno (2019) for a recent systematic review of the literature).
Whether considering LCDP or EUD, the users’ prior knowledge plays a relevant role in the learning and using of a platform, as it will affect the way users approach the platform (Dijkstra, 1982). In the case of LCDP, there is the double challenge of supporting users with little or no knowledge of programming, while also supporting expert programmers. Indeed, understanding individual differences and expectations, and identifying the sources of variation among different users will help this type of platforms to be more broadly adopted (Blackwell, 2017). Since low-code development platforms aim at reducing the learning burden while providing powerful tools to address a wide range of problems, a trade-off must be established between the scope of application and the learning costs of the platforms and their languages. This necessarily implies building an understanding of how different types of users approach the platforms.
Descriptive Cognitive Models (DCM) can be used to study the interaction between one interactive system and its users, in particular to analyze how the interplay between the users’ cognitive processes and the user interfaces’ design might lead to faulty interactions or use errors (Nielsen, 1994). Their applicability to reasoning about the act of programming has long been explored (cf. Blackwell, Petre & Church, 2019). Nevertheless, in spite of relevant Human-Computer Interaction (HCI) findings and developments since the 1980s and recent developments in both LCDP and EUP, there is still a considerable number of relevant gaps in current knowledge about how people reason during programming and development tasks (Sajaniemi, 2008). According to Myers, Pane, and Ko (2004), conventional programming languages require the user or programmer to make “tremendous transformations” (p. 48) from what he or she intends to accomplish, to what he or she should code. Visual modelling languages, typically adopted by low-code development platforms, aim to mitigate this problem, but their actual effectiveness is still subject to debate.
The distance between the mental and the physical spaces in software development was the motivation behind the current work. More specifically, the long-term goal of this work is to support lowering the learning curve of a specific LCDP to the point that non-programmers (i.e., end-users) might use it to develop software (in practice, turning it into an EUD platform). The challenge then, is how to reduce the learning effort of users without reducing the scope of the possible application domains. As a contribution to this long-term goal, the work described in this paper aimed at
understanding the difficulties faced by potential programmers with different expectations and academic backgrounds when using a specific LCDP. To achieve this, we developed a new descriptive cognitive model with the purpose of predicting usability issues in a LCDP.
**THE LCDP – Low-Code Development Platform**
A low-code development platform supports the development of software applications resorting to minimal code writing. Its objective is to empower different kinds of users, by allowing them to easily and quickly create applications: experienced users (e.g., programmers) are able to create software by writing considerably less code, while users without prior experience will require less formal training to start creating applications.
Due to non-disclosure agreement conditions, we are not authorized to name the LCDP under study, and for that reason it will henceforth be referred to simply as the LCDP. The LCDP under study allows developers to create both full stack web applications, and mobile applications. It provides a set of predefined templates to bootstrap the development process, which creates the base application. Developers can then expand the application on top of that. The development process itself is performed by resorting to high level development languages, mainly visual languages, similar to Unified Modeling Language (UML) diagrams (Fowler, M., & Kobryn, 2004). The platform also allows developers to graphically edit the interfaces and automatically generate pages and components (e.g., through drag and drop interactions). With this LCDP, it is possible to develop enterprise-grade level applications thanks to the integration mechanisms provided, for instance, with web services, databases or external systems (e.g., SAP).
Different languages with different abstraction levels are provided to define different components of the system. The definition of some aspects of the system, such as navigation between screens, the behavior of the screens and buttons, is done through a statechart-like language, as they are adequate for control-flow modeling. These diagrams have a simple syntax, which has the objective of being easily understood by a large audience. Some more complex aspects, such as data retrieval from a database, resort to a Domain Specific Language (DSL), which is more powerful, but simultaneously more complex. The platform also takes advantage of widely known formats, such as
spreadsheets, in order to speed up the development process. This empowers those users who are non-experienced developers, but have had previous contact with these technologies, to more easily understand the platform. Once finished, the applications are converted into standard technologies (familiar web, back-end and mobile languages), and deployed into a cloud environment. The applications become immediately available once published. Examples of this type of platform include Appian, Google App Maker, Microsoft PowerApps, MIT App Inventor, Nintex Workflow Cloud, OutSystems, Sysdev Kalipso and Zoho Creator.
**Descriptive Cognitive Models**
At the dawn of HCI as an independent discipline, Richard Young wrote that “*for an interactive device to be satisfactory, its intended users must be able to form a ‘conceptual model’ of the device which can guide their actions and help them interpret its behavior*” (Young, 1981). Since then, it is commonly agreed that knowledge about how users perceive and interact with a computerized environment is of the foremost importance in the design of computer systems that emphasize usefulness and usability (Silva, 2013). The development process of an interactive system greatly benefits from putting the human, the user, in a central position during discussion and design (Dix, Finlay, Abowd, & Beale, 2004; ISO, 2010). In order to better understand how the user conceptualizes and interacts with a system, the discipline of HCI often resorts to models.
Descriptive Cognitive Models (DCMs) are widely used in the study and development of interfaces. Their analytical processes have since long been applied by experts, analysts and developers in order to obtain insight on how the interaction flow, the design features, or the information content of an interface might lead to performance deficits, faulty interactions or use errors (Nielsen, 1994). Although a comprehensive set of DCMs have been developed since the 1980s, it is usually a combination of different models tailored to a specific application case that provides the best result (i.e., insights on how users are thinking about the system and the interaction process). For the purpose of predicting usability issues in LCDP we have selected task decomposition models for a recursive decomposition of our main task into sub-tasks; knowledge-based analysis to comprehend the user’s knowledge about the objects and actions involved in a given task; and risk assessment to analyze and evaluate the risk associated with the identified issues.
**Task Decomposition**
The process of describing the interaction process is often referred to as *Task Analysis* and
consists of detailed descriptions and analysis of how people perform their jobs or tasks. It details what
they do, what they act on and what they need to know. Identifying the elements and the goals of the
task is an essential step to examine the skills necessary to perform a given job. Task decomposition
can be performed either in the design phase of a new system or to suggest changes in an existing
system.
**Hierarchical Task Analysis (HTA)**
The purpose of HTA is to decompose a task into all its sub-tasks in a way that displays the
hierarchical relation between them. It is one of the most predominant examples of a task
decomposition methodology. The outputs of HTA are a hierarchy of tasks and sub-tasks, together
with plans describing in what order and under what conditions sub-tasks are performed. For examples
and further details please see Dix et al. (2004) and, for a review on different ways of presenting an
HTA and a proposal on an updated notation, see Huddlestone and Stanton (2016).
**Knowledge-based analysis**
The aim of a Knowledge-based approach to task analysis “is to understand the knowledge
needed to perform a task” (Dix et al., 2004). The main goal of this type of analysis is to build general
knowledge taxonomies for each task, after listing all objects and actions. Programming is a
knowledge-based activity, and for the purpose of this study we will focus on analyses designed to
predict difficulties from interface specifications, namely the External-Internal Task Mapping
Analysis.
**External-Internal Task Mapping Analysis (ETIT)**
The ETIT model attempts to deal with the mismatch between the way the user thinks and the way a system is designed, stating that this mismatch persists until the user learns how to translate what he or she wants to do into the system’s terms. ETIT is a contribution of one of the most seminal authors in HCI, Thomas P. Moran (Moran, 1983). In his paper, Moran points out the need for users to map between the task they are performing and their conceptual model of the machine. Thus, ETIT was conceptualized as a way of assessing (1) the learning complexity for a naïve user or (2) the transfer of knowledge between different systems. In the first case, which is the focus of the current work, ETIT assumes two different spaces: 1) the external task space (i.e., the naïve user’s mental model of the task) and 2) the internal task space (the system’s commands that allow the user to perform the task). The relation between both spaces is an indicator of the difficulty found in learning how to use the system. According to Moran (1983), when people start using a system, they know they must convert the tasks they have to perform into the system’s language and concepts, i.e., they must learn to translate what they want to do into the system’s terms. In this model, this translation is represented by mapping rules.
The ETIT analysis has three parts:
1. An external task space (concepts and tasks described in those concepts) – whenever a person needs to perform a task, this task is formulated in the external domain/real world, not in the system’s terms (Moran, 1983). This means that people formulate their own mental model of the task, using their own known concepts, words and logic;
2. An internal task space (concepts and tasks described in those concepts) – the system’s commands and interaction flow that allows the user to perform the task;
3. A mapping from the external task space to the internal task space.
While the external space is rich and diverse, systems are not. Systems usually abstract a small set of primitive concepts, converting the external task space into smaller internal task spaces. Particularly relevant here is that, while ETIT was conceived as a tool for system design, it can also be used as a competence model of the user, because it makes explicit the knowledge necessary to execute a task.
**Risk Assessment**
The term “risk assessment” is usually connected to occupational health and safety, but in this case the methodology will be applied to the identification of usability and interaction issues that have the potential to compromise the application under development. The aim of such a process is firstly to identify the issue, and then to mitigate its consequences by adding control measures (Amir-Heidari & Ebrahemzadih, 2015). The steps applied in our descriptive cognitive model consist of:
- **hazard identification** - finding, listing and characterizing issues
- **risk analysis** - determining the likelihood of the issue
- **risk evaluation** - comparing an estimated issue against predetermined risk criteria to determine the significance or criticality of the issue
After performing a risk assessment, a remedy analysis can be performed where error reduction strategies are defined.
**Proposed model - PRECOG: low-code development Platform descRiptivE COGnitive model**
The descriptive cognitive model we propose was named PRECOG – low-code development Platform descRiptivE COGnitive model (Figure 1). From an analysis of the summarized methods, it became clear that these might interplay to account for an informative Descriptive Cognitive Model. In order to apply the proposed model, the analyst should start by performing an HTA of the particular development use case under analysis. The output of the HTA will provide a list of sub-tasks of the use case that need to be further analyzed. This will be done according to an adapted version of the ETIT. In this adapted version, sub-tasks will be described from the perspective of a naïve user's mental
model, using his or her own terms and concepts (External task space, henceforth the Knowledge-Based Description - KBD), and from the perspective of an internal task space (henceforth System-Based Description - SBD), which details the steps needed in the LCDP in order to accomplish the sub-task.
The KBD relies on data collected from users prior to any interaction with the system. With only a brief description of each task, participants should describe how they would reach each task’s final goal using their current knowledge and familiar development tools. This information provides the analysts with the participant’s mental model of the tasks at hand, before any interaction with the system under evaluation.
The SBD, on the other hand, is a step-by-step description of the actions performed by an expert user of the system. For the purpose of comparability, the evaluator should define the same decomposition stop-condition for both descriptions. One fitting criterion for an LCDP HTA stop-condition is a discernible user interaction capable of being recorded by the platform (e.g., a drag-and-drop action; the selection of an item from a drop-down menu; the establishment of a new connection in a state-chart; the writing of an expression to define a condition). User interactions at a more atomic level, such as mouse movement, hovering, or typing of a specific character, do not need to be specified in HTAs for LCDP.
Mapping rules should be established between the naïve user’s mental model (KBD) and the system-based description (SBD), in order to identify possible conflicts. According to our predictions, three types of conflicts might be uncovered by looking at the mapping between the two spaces:
1. **Under decomposition conflict** - occurs when a procedure that is considered by the user as a single step in the knowledge-based description requires multiple steps in the system-based description. This type of conflict might lead to an underestimation of the sub-task’s complexity.
2. **Over decomposition conflict** - occurs when a procedure that is considered by the user as having multiple steps in the knowledge-based description requires only one step in the system-based description. This type of conflict might lead to an overestimation of the sub-task’s complexity and failure to identify and take shortcuts during the development task.
3. **No Correspondence conflict** - occurs when there is no link between a step in the knowledge-based description and one (or several) steps in the system-based description. This might occur because the user is not aware of the appropriate steps required by the LCDP or because the system does not include a feature representative of a mental step the user thinks is needed.
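The three conflict types can be sketched programmatically. The following is an illustrative sketch only, not part of the paper's method: a KBD-to-SBD mapping is modelled as a dict from each knowledge-based step to the list of system-based steps it maps onto, and all step names below are hypothetical.

```python
def classify_conflicts(mapping, sbd_steps):
    """Classify KBD->SBD mapping rules into the three conflict types."""
    conflicts = {"under": [], "over": [], "none": []}
    seen_sbd = {}
    for kbd_step, targets in mapping.items():
        if not targets:
            # KBD step with no SBD counterpart: no correspondence
            conflicts["none"].append(kbd_step)
        elif len(targets) > 1:
            # one KBD step requires several SBD steps: under decomposition
            conflicts["under"].append(kbd_step)
        for t in targets:
            seen_sbd.setdefault(t, []).append(kbd_step)
    for sbd_step, sources in seen_sbd.items():
        if len(sources) > 1:
            # several KBD steps collapse into one SBD step: over decomposition
            conflicts["over"].append(sbd_step)
    for s in sbd_steps:
        if s not in seen_sbd:
            # SBD step the user never anticipated: no correspondence
            conflicts["none"].append(s)
    return conflicts

# Hypothetical mapping for a "list and search books" sub-task.
mapping = {
    "create a table of books": ["Add Entity", "Add Attributes"],
    "show the list on a page": ["Drag entity to screen"],
    "add a search box": [],
}
sbd = ["Add Entity", "Add Attributes", "Drag entity to screen",
       "Generate search boilerplate"]
print(classify_conflicts(mapping, sbd))
```

Here "create a table of books" is flagged as under decomposition (one mental step, two platform steps), while "add a search box" and "Generate search boilerplate" have no correspondence in the opposite space.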
---
**Figure 1** – The proposed methodology applied to a given subtask of the HTA. Red triangles signal identified issues that occurred during interaction and the numbers inside correspond to the type of error (1 - Under decomposition; 2 - Over decomposition; 3 - No correspondence). Triangles are colored grey if, after empirical tests with a user, the analyst finds that the predicted error did not occur (false positive).
Once one of these types of conflicts is found, a risk analysis of the conflict should be made. This kind of risk analysis is usually performed in safety-critical scenarios, which is not the context of the current study. Nevertheless, for the purpose of providing a richer PRECOG analysis, the risk analysis step can help define priorities in the continuous improvement of the system.
Figure 1 summarizes the steps of the overall analysis process. First, a development task is selected (in this case, create a user interface to list and search for books) and two branches originate from this task. On the top right-hand side of the figure (Step 1.2), an example HTA is presented for the development task. The HTA will generate the System-Based Description (SBD) or Internal Task Space for the selected task. On the left (Step 1.1), the Knowledge-Based Description (KBD) or External Task Space consists of a description made by people who have not interacted with the low-code system, listing how they would expect to perform the task at hand, knowing what they know at the moment. The second step (Step 2 in the figure) consists of comparing the KBD with the SBD per participant and identifying conflicts that might occur in the interaction using the conflict taxonomy described above ((1) under decomposition, (2) over decomposition, and (3) no correspondence). In this example, four issues arise and are represented by the four triangles. The triangles are red if that issue eventually occurred during interaction, and are marked as grey if they did not occur, being classified as a false positive. Under decomposition and over decomposition markers are placed inside the box, and no correspondence markers are placed on the leftmost border of the boxes.
The third and last step in Figure 1 consists of a risk analysis of the identified issues. The risk analysis of a conflict includes:
- **Frequency** - An ordinal scale from 0 (never) to 5 (frequent)
- **Criticality** - All issues are evaluated regarding the level of criticality as described in Figure 2, ranging from 2 (not a problem) to 10 (serious)
- **Pure Risk** - The Pure Risk value of the error is a single value representing the weight that should be attributed to an error. It results from the intersection of a value of Frequency with a value of Criticality in the matrix adapted from Amir-Heidari and Ebrahemzadih (2015) (Figure 3). The value ranges from 0 to 50; the higher the value, the more critical the issue should be considered, with a higher priority for intervention.
**Figure 2.** Descriptors for each level of Criticality, adapted to the analyzed use-cases
<table>
<thead>
<tr>
<th>Criticality</th>
<th>Descriptor</th>
</tr>
</thead>
<tbody>
<tr>
<td>10</td>
<td>Serious</td>
</tr>
<tr>
<td>8</td>
<td>Relevant</td>
</tr>
<tr>
<td>6</td>
<td>Marginal</td>
</tr>
<tr>
<td>4</td>
<td>Insignificant</td>
</tr>
<tr>
<td>2</td>
<td>Not a problem</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Frequency (N)</th>
<th>Descriptor</th>
</tr>
</thead>
<tbody>
<tr>
<td>5</td>
<td>Frequent (8 to 10 occurrences)</td>
</tr>
<tr>
<td>4</td>
<td>Probable (7 to 8 occurrences)</td>
</tr>
<tr>
<td>3</td>
<td>Occasional (5 to 6 occurrences)</td>
</tr>
<tr>
<td>2</td>
<td>Remote (3 to 4 occurrences)</td>
</tr>
<tr>
<td>1</td>
<td>Rare (1 to 2 occurrences)</td>
</tr>
<tr>
<td>0</td>
<td>Never</td>
</tr>
</tbody>
</table>
**Figure 3.** Matrix of Frequency with Criticality adapted from Amir-Heidari and Ebrahemzadih (2015). The intersection of a value of Frequency with a value of Criticality provides the Pure Risk of the issue, a single value representing the weight that should be attributed to the issue. The higher the value, the more critical the issue should be considered.
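The Pure Risk computation can be sketched as follows. The text only states the 0-50 range; treating Pure Risk as the product of the Frequency level (0 = never, ..., 5 = frequent) and the Criticality level (0, ..., 10) is our assumption, consistent with that range.

```python
# Criticality descriptors from Figure 2.
CRITICALITY = {"not a problem": 2, "insignificant": 4, "marginal": 6,
               "relevant": 8, "serious": 10}

def pure_risk(frequency, criticality):
    """Weight of an issue: higher means higher priority for intervention.

    Assumes (not stated explicitly in the text) that Pure Risk is the
    product of the two levels, which matches the stated 0-50 range.
    """
    if not (0 <= frequency <= 5 and 0 <= criticality <= 10):
        raise ValueError("frequency must be in 0..5, criticality in 0..10")
    return frequency * criticality

# A frequent (5) and serious (10) issue gets the maximum weight of 50.
print(pure_risk(5, CRITICALITY["serious"]))  # -> 50
```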
A model of this sort should be developed for each sub-task deemed relevant for analysis. This will provide relevant information about the participants’ difficulties in mapping their conceptual model of the task to the system’s operational environment.
**Applications of the PRECOG model**
PRECOG can have two main applications. It can (1) be used retrospectively to understand the root-cause of an issue identified through user testing, or (2) it can be used predictively to understand which potential issues are going to arise.
In the first case, the model is used to analyze and understand the problems identified during user testing. These problems can be tracked by identifying the correspondence conflicts in the previously made ETIT mapping. This helps to identify potential causes for the problems in terms of mismatches between the KBD and the SBD. The number of observations of the different errors provides additional information that is then used in the risk analysis (as the frequency of occurrence of the errors).
The second application for PRECOG is to use it as a predictive model, which can provide valuable information without the time-consuming process of data gathering with real users. This application takes advantage of the fact that the mapping of the participant’s (knowledge-based) description to the platform’s requirements provides a first prediction of the effect of the differences between the user’s knowledge and the system’s requirements to achieve a given task. In this case, instead of evaluating the criticality of the issues that effectively occurred, an expert analyst walks through the LCDP and decides on the likelihood of the identified potential problems, evaluating their probability of occurrence given the platform’s design. This evaluation should consider, for instance, visual aids and widgets that are available on the platform, and which might be helpful in solving the issue under analysis. This stage of the process refines the predictive power of the model, as it eliminates false positives identified in the mapping stage. Calculating the Pure Risk of all identified errors is then done using an adapted version of the matrix in Figure 3, using Probability (Never, Low, Medium, High) instead of Frequency. The end result will be a list of potential errors organized by Pure Risk evaluation. Remedy analysis of relevant issues (for instance, all Marginal, Relevant and Serious issues would be further detailed) might then be performed, resorting to either expert evaluation or empirical analysis.
**Application and Validation of PRECOG in Empirical user studies**
In this section, we present an application of the PRECOG model where we will detail the analytical process of constructing the HTA, the adapted ETIT analysis to map user knowledge and system-based descriptions, and finally the risk analysis of the identified potential interaction problems. The presented results correspond to the validation of the PRECOG model through empirical user studies. They allow us to understand the suitability and viability of using the selected techniques for the analysis of interaction conflicts in a low-code development platform, including the impact of the identified problems and the analysis of their root-causes.
METHOD
The first phase of the application of the PRECOG model consisted of defining, with a professional user and LCDP developer, a set of representative tasks which could be performed by different types of users. After all tasks were defined, this expert user performed them, and the performance was later analyzed in detail in order to obtain an HTA of each task. The HTA also provided the basis to develop the System-Based Description used in the model for each task.
The second phase of the application was performed after empirical user studies with 20 participants. Besides providing usability metrics of performance, these user studies allowed the authors to gather the Knowledge-Based Description of each participant, collected prior to any contact with the LCDP. The user studies complied with the American Psychological Association Code of Ethics, and an informed consent was obtained from each participant.
With both the Knowledge- and System-based descriptions, it was possible to perform the mapping between the two, applying PRECOG to each participant and listing all the potential issues and mistakes that could happen during task execution.
The final phase of the validation effort was carried out after thorough video analysis of each participant’s performance and comparison between the model’s prediction and the real outcome in the user studies. Having identified all observed issues, risk analysis was applied to understand the issues’ frequency and criticality.
**Participants**
A total of 20 participants were recruited (Table 1). The recruited population met the following requirements:
- 10 participants had a software-engineering background (formal education in the past or present) - denoted as **Experts**;
- 10 participants had education in social sciences, economics or finance areas - denoted as **Novices**;
- All participants were over 18 years old, proficient in English, unfamiliar with the LCDP (had never worked with it), and willing to accept the sessions to be recorded (screen and audio).
The recruitment was made via internal mailing lists at the authors’ institutions as well as personal contacts. Of the 20 participants, 7 were female and 13 were male.
**Table 1 - Characterization of the participants**
<table>
<thead>
<tr>
<th></th>
<th>Novices</th>
<th>Experts</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Gender</strong></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Female</td>
<td>6</td>
<td>1</td>
</tr>
<tr>
<td>Male</td>
<td>4</td>
<td>9</td>
</tr>
<tr>
<td><strong>Degree</strong></td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td>Psychology, Economics, Biochemistry, Management, Acoustics</td>
<td>Software Engineering</td>
</tr>
</tbody>
</table>
The recruitment phase included a questionnaire to understand the participants’ experience with programming languages. This questionnaire was custom-made, inspired by the knowledge acquired by students during an informatics degree, and was divided into three sections, each with increasing complexity in terms of computer science skills. The first section tested whether the participant was familiar with EUD tools (specifically, spreadsheet editing software) and basic computational concepts, such as the concept of a formula. The second section tested the capability of the user to understand simple software development concepts, such as interpreting and writing software, elementary data structures (e.g., arrays and binary trees), and query languages. The third section tested whether the user had advanced software development skills, such as communication protocols, object-oriented concepts and software modelling. Participants should also indicate which programming languages and integrated development environments (IDEs), if any, they felt comfortable using.
**Defining the System-based description**
The participants’ use-case consisted of creating a web application to manage books. The development of the application (“My Books”) was divided into five tasks, which could be performed in any order, each aimed at fulfilling one of the following requirements (in order of increasing difficulty):
1. The user of the “My Books” application should be able to list and search all books;
2. The user should be able to see and edit the details of a book;
3. The user should be able to register new books in the application;
4. The application should present the user with a homepage with two buttons:
a. One that redirects users to the list of books;
b. Another that goes to the screen that allows registering a new book;
5. When seeing the details of a book, the user should see a list of other books from the same author.
We started by developing an HTA for the overall use case, divided into five HTA sub-diagrams corresponding to the five different tasks of the use case. Figure 4 presents the HTA diagram for the first task, following the graphical notation of Marshall *et al.* (2003), where the main task description is at the root of the diagram (“0. List and Search all books”) and the different sub-tasks are at lower levels (e.g., “1. Create book representations”).
Figure 4 - Example of one HTA diagram for the use case task “List and Search all books”.
When a (sub)task is further decomposed, a plan describing how its subtasks can be combined is detailed.
After defining the HTA for each sub-task, we defined the level of the HTA tree that best represents distinguishable tasks in the interface and listed all the units of interaction that are required in the LCDP in order to complete that specific sub-task. For instance, to create a list of books with a search function, the user of this particular LCDP has to perform the following actions:
1. Go to Data Menu;
2. Select Database;
3. Add Entity named “Book”;
4. Add Attributes;
5. Rename Attributes;
6. Add Book Entity to Home Screen representation;
7. Boilerplate generation of search functionality.
**Defining the Knowledge-based Description**
In order to obtain a Knowledge-based Description, each participant was instructed to verbally describe how he or she would perform each task, both in terms of the interface and in terms of the back-end development. Using the conceptual knowledge they had and the tools they knew (if any), participants described how they would complete the tasks. This was requested before the participant interacted with the LCDP.
**Material**
The tests were performed in quiet testing rooms. Locations were equipped with a table and three chairs, a laptop computer for the participant, with the LCDP running, ActivePresenter 7 screen and audio capture software (Atomi Systems, 2019), and a video camera (the participant’s facial expressions were not captured at any time, the camera focused on the screen of the laptop as a backup measure).
**Procedure**
The participant was welcomed by the test moderator and the data logger, who explained that an evaluation was being carried out on how hard or how easy it was to use a particular LCDP. He or she read and signed the informed consent where more detailed information was provided. Each participant was presented with the five tasks. First, the participant was instructed to describe verbally how he or she would perform each task, both in terms of the interface and in terms of back-end development. Regarding this process, it became evident that the test moderator played a role in the success of this phase, which would be used to build the Knowledge-based Description of each participant. The moderator should prompt the participant when he or she becomes quiet, asking specifically about aspects of the application in order to gather as much information as possible.
Then, the participant was requested to carry out the tasks after completing a tutorial. The tutorial consisted of an interactive session in which the users were introduced to the LCDP and its basic concepts were explained. The test moderator informed the participants that there were several
ways of concluding the tasks, and that they could search the internet for answers. Each participant was provided with the same instructions. The participants were given a written copy of the instructions and respective memory aides.
**Analysis**
For each subtask in the Hierarchical Task Analysis deemed relevant, we developed adapted ETIT mappings from the collected Knowledge-based Description to what was the required plan of action in the platform (the System-based Description). Figures 5 and 6 show as an example the ETIT mapping for the sub-task “List and Search all Books”, for an expert and a novice user, respectively. The end-result of the mapping exercise allows the analyst to identify what type of use-errors might occur during that sub-task.
**Figure 5** - Example of the adapted ETIT mapping for the sub-task “List and Search all Books” in an Expert user. Dashed arrows correspond to implicit steps.
**Figure 6** - Example of the adapted ETIT mapping for the sub-task “List and Search all Books” in a Novice user.
As we pointed out earlier, analysts can uncover three types of conflicts by looking at the mapping rules. Figure 5 illustrates all three types of conflicts - under decomposition in step 2 of the KBD, no correspondence in step 3, and over decomposition in step 6 of the SBD.
**Identify the root-cause of issues found in empirical tests with real users**
The analysis depicted in Figure 1, which shows the complete flow from the definition of the models, through the identification of a KBD-SBD conflict, to the risk analysis of that conflict, was the one followed in the present work.
Each interaction video of the participants was thoroughly observed, the issues that occurred were identified and their relevance analyzed using the following steps:
- **Root Cause** – After analysis of the interaction that resulted in an error, a root cause was identified.
- **Remedy Analysis** – A recommendation for a way to avoid the error or issue was devised.
- **Evidence** – When available, evidence gathered during data collection with participants (video) was also registered.
- **Pure Risk** – The frequency and criticality of the issues was evaluated by the authors and the LCDP’s research and development team.
One complete analysis took on average 40 minutes per participant for an experienced analyst. The Frequency of an issue was defined as the frequency with which the issue under analysis was observed in the empirical tests. The scale can be adjusted depending on the size of the sample, without the need to modify our model.
To determine Criticality, all observed issues were evaluated by four authors and four professional LCDP developers regarding the level of criticality as described in Figure 2, ranging from 2 (not a problem) to 10 (serious). The evaluations were performed individually considering the general criticality of the issue, and not the particular context/participant where it happened. Inter-rater reliability was assessed using a two-way, average measures ICC (intraclass correlation). The resulting ICC was in the “fair” range, ICC=0.50 (Cicchetti, 1994), indicating that evaluators had a fair degree
of agreement. The ICC increased to 0.68, in the “good” range, when only one group (authors) was considered. This indicates different evaluation criteria from both groups, something which would be worth exploring in the future. The final Criticality value was the mode of the eight evaluations.
RESULTS
In this section we will present results on the comparison between PRECOG’s predictions and the interaction issues observed during the empirical usability tests. Moreover, we will also address the nature of the use-errors that were predicted and verified, in terms of their root-cause, frequency, and pure risk.
The comparison between PRECOG’s predictions and the results of the empirical usability tests will be presented regarding the two profiles (expert and novice) and the three types of conflicts signaled by PRECOG (1 - Under decomposition; 2 - Over decomposition; 3 - No correspondence). In Table 2 we can see that out of a total of 135 potential interaction issues identified by PRECOG, 67 (49.6%) occurred during the empirical usability tests. Moreover, of all interaction issues verified in the empirical usability tests, only 5 were not predicted by a type of conflict signaled in the PRECOG model. Table 2 summarizes the outcome of applying PRECOG in three confusion matrices, considering the studied profiles both combined and separately.
**Table 2 - Confusion Matrices for a combination of all participants and by participants’ profile (Novices and Experts).**
<table>
<thead>
<tr>
<th></th>
<th>Verified Use Error</th>
<th>Verified No Use Error</th>
<th>Total</th>
</tr>
</thead>
<tbody>
<tr>
<td>Predicted Use Error</td>
<td>67</td>
<td>68</td>
<td>135</td>
</tr>
<tr>
<td>Predicted No Use Error</td>
<td>5</td>
<td>64</td>
<td>69</td>
</tr>
<tr>
<td><strong>TOTAL</strong></td>
<td>72</td>
<td>132</td>
<td><strong>204</strong></td>
</tr>
</tbody>
</table>
In order to assess the predictive capability of our DCM we analyzed the values presented in Table 2, where it is possible to see the total number of true-positives (i.e., predicted and confirmed use-errors), false-positives (i.e., predicted but unconfirmed use-errors), true-negatives (i.e., predicted and confirmed absence of use-errors), and false-negatives (i.e., not predicted but confirmed use-errors). An efficient predictive model aims at scoring high in both true-positives and true-negatives and low in both false-positives and false-negatives. From a confusion matrix one can calculate several complementary values to assess a classifier’s predictive capability (Powers, 2011; Tharwat, 2018), namely:
- **Sensitivity** - also called recall, is the proportion of the positive samples (i.e., verified use-errors) that were correctly classified as such. Sensitivity depends on true-positives (TP) and false-negatives (FN), which are in the same column of the confusion matrix, and can be calculated as: Sens = TP/(TP+FN)
- **Specificity** - also called inverse recall, is the proportion of negative samples (i.e., verified no use-errors) that were correctly classified as such. Specificity depends on true-negatives (TN) and false-positives (FP), which are in the same column of the confusion matrix, and can be calculated as: Spe = TN/(TN+FP)
- **Accuracy** - is defined as a ratio between the correctly classified samples to the total number of samples, and can be calculated as follows: Acc = (TP+TN)/(TP+TN+FP+FN)
- **F1-score** - also called F1-measure, is the harmonic mean of sensitivity and the positive predictive value (PPV = TP/(TP+FP)). The value of the F1-score ranges from zero to one, and high values indicate high classification performance. The F1-score is calculated as follows:
\[ F1 = \frac{2TP}{2TP+FP+FN} \]
- **Informedness** - also called *Youden’s index*, quantifies how informed a predictor is for the specified condition, and specifies the probability that a prediction is informed in relation to the condition (versus chance) (Powers, 2011). The value of Informedness ranges from zero, or chance level, to one, representing perfect predictive capability. Informedness can be calculated as follows: Inf = Sensitivity + Specificity - 1
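These formulas can be checked directly against the all-participants confusion matrix in Table 2 (TP = 67, FP = 68, FN = 5, TN = 64); a minimal Python sketch:

```python
def metrics(tp, fp, fn, tn):
    """Compute the five predictive metrics from a 2x2 confusion matrix."""
    sens = tp / (tp + fn)                       # Sensitivity (recall)
    spec = tn / (tn + fp)                       # Specificity (inverse recall)
    acc = (tp + tn) / (tp + tn + fp + fn)       # Accuracy
    f1 = 2 * tp / (2 * tp + fp + fn)            # F1-score
    informedness = sens + spec - 1              # Youden's index
    return {"sensitivity": sens, "specificity": spec,
            "accuracy": acc, "f1": f1, "informedness": informedness}

# All-participants confusion matrix from Table 2.
m = metrics(tp=67, fp=68, fn=5, tn=64)
print({k: round(v, 2) for k, v in m.items()})
# Rounded to two decimals, these reproduce the "All Participants" column
# of Table 3: 0.93, 0.48, 0.64, 0.65 and 0.42.
```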
Table 3 shows the PRECOG’s values obtained for these different variables, based on the confusion matrices presented in Table 2.
### Table 3 - PRECOG’s predictive metrics for All Participants and divided by user profile
<table>
<thead>
<tr>
<th></th>
<th>All Participants</th>
<th>EXPERTS</th>
<th>NOVICES</th>
</tr>
</thead>
<tbody>
<tr>
<td>Sensitivity</td>
<td>0.93</td>
<td>0.95</td>
<td>0.90</td>
</tr>
<tr>
<td>Specificity</td>
<td>0.48</td>
<td>0.54</td>
<td>0.35</td>
</tr>
<tr>
<td>Accuracy</td>
<td>0.64</td>
<td>0.67</td>
<td>0.59</td>
</tr>
<tr>
<td>F1 Score</td>
<td>0.65</td>
<td>0.64</td>
<td>0.66</td>
</tr>
<tr>
<td>Informedness</td>
<td>0.42</td>
<td>0.49</td>
<td>0.25</td>
</tr>
</tbody>
</table>
Both Powers (2011) and Tharwat (2018) discussed the advantages and limitations of each of these metrics for classification performance. According to Powers, Sensitivity and F1-scores ignore performance in correctly handling negative examples, propagate the underlying marginal prevalence and biases, and fail to account for chance-level performance. Nevertheless, Tharwat makes the case that all these different metrics, whether more focused (such as Sensitivity, Specificity, and F1-score) or more general (such as Accuracy and Informedness), are useful to understand all the potentialities of a particular classifier. In the case of PRECOG, Sensitivity was generally higher than Specificity and, while Accuracy and F1-scores were fairly similar for both Experts and Novices, Informedness was the measure that varied the most according to type of participant.
Statistical tests revealed differences between Experts and Novices concerning Specificity and Informedness. An unpaired two-sample Wilcoxon test indicated that Specificity in Experts was significantly higher than in Novices ($W = 73.5$, $p < .01$, $r = -0.59$). Similarly, Informedness ($W = 81$, $p < .01$, $r = -0.72$) was also significantly higher in Experts than in Novices.
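The unpaired two-sample Wilcoxon test is the Mann-Whitney U (rank-sum) test. As a hedged sketch of the statistic behind the comparison above, the U statistic can be computed by pairwise comparison; the per-participant Specificity values below are synthetic, not the study's data.

```python
# Sketch of the Mann-Whitney U statistic (a.k.a. unpaired two-sample
# Wilcoxon test statistic). Values are synthetic, for illustration only.
def mann_whitney_u(xs, ys):
    """U statistic: number of (x, y) pairs with x > y; ties count 0.5."""
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

experts_spec = [0.70, 0.60, 0.55, 0.50]   # hypothetical Specificity values
novices_spec = [0.30, 0.35, 0.40, 0.50]
u = mann_whitney_u(experts_spec, novices_spec)
# A p-value would then come from the exact U distribution or a normal
# approximation (in practice, e.g., scipy.stats.mannwhitneyu).
```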

**Figure 7 - Differences in PRECOG’s classification of Experts and Novices concerning Specificity and Informedness.**
Having calculated the Sensitivity and Specificity of each participant’s classification, it was possible to map the performance of PRECOG in a Receiver Operating Characteristic (ROC) plot (Figure 8). Participants are mapped mainly in the upper left-hand region of the ROC space, meaning that, for most participants, PRECOG’s classification was predictive of actual behavior.
Again, it is possible to verify that data points of Expert participants are further from chance level \( (x=y) \) than data points of Novice participants.
*Figure 8 - Receiver Operating Characteristic (ROC) data point cloud representing Experts (points) and Novices (triangles). The diagonal line represents the chance level.*
Looking at the distribution of True Positives and False Positives according to profile, and considering the three types of conflicts predicted by PRECOG, Under decomposition conflicts were the type of mapping conflict where PRECOG performed best. PRECOG obtained the highest difference between True Positives and False Positives in this type of conflict, with Under decomposition conflicts accounting for 17.5% of the overall correct predictions in Experts and 23.6% in Novices. In the case of Experts, Over decomposition got the highest rate of True Positives (20%); however, this type of conflict also accounted for 17.5% of False Positives. For the Over decomposition conflicts in Novices, the reverse pattern was observed, with a higher number of False Positives (18.1%) than of True Positives (10%). Finally, No correspondence conflicts were the type of mapping conflict where PRECOG performed worst, with a higher number of False Positives than True Positives both in Experts (20%) and in Novices (21%).
Figure 9 - Distribution of True Positives and False Positives according to profile, considering the three types of conflicts predicted by PRECOG
Focusing on the data coming from the usability empirical studies, and considering the Frequency of the observed interaction issues, we were able to identify 10 types of issues: Input parameter; New database; Table with data; Button link; New screen; Query; Search function; Relate data; Change homepage; Details. Figure 10 depicts the Frequency with which each of these issues occurred in both profiles. It is possible to observe that the same issues did not occur for both profiles. As a first characterization, Experts had a more diverse typology of issues, experiencing nine different types, while Novices had six. No participant had any “Button Link” related issues. On the Novice side, they did not have issues related to “Table with data”, “Query” and “Relate data”. For the Experts, the issue with the highest True Positives was “Query”, with 8 correctly identified issues. The highest numbers of False Positives among Experts were found in the “New Screen” and “Change homepage” issues, where 7 identified issues were considered False Positives. Regarding Novices, “Input Parameter” and “New database” issues had 7 True Positives, followed by “New Screen” with 6 True Positives. Concerning False Positives, “Search function” with 7 False Positives was followed by “Details” with 6.
Figure 10 - Frequency of True and False positive issues identified in the empirical tests after KBD-SBD mapping
Besides Frequency, another component for the Pure Risk analysis is the Criticality of the issues.
Table 4 presents the final Criticality attributed to the issues that emerged during our analysis.
Table 4 - Criticality of the identified issues by a pool of eight evaluators
Figure 11 depicts all ten different issues observed during the interaction with the LCDP. The issues are presented in terms of the Pure Risk evaluation (\( \text{min}=0, \text{max}=50 \)). Besides Criticality, the Pure Risk evaluation considers the Frequency of occurrence of each issue, hence the difference in the value of the same issue for both profiles.
**Figure 11 - Pure Risk of each observed issue per profile. Issues found by the LCDP Descriptive Cognitive Model according to their Pure Risk evaluation (\( \text{min}=0, \text{max}=50 \))**
This data indicates that, for instance, the “Query” issue should be solved, as almost all Experts who interacted with the LCDP had issues with creating a query. It also indicates that the weight of each issue differs depending on the LCDP user. For Novices, the highest Pure Risk value is in the “Input Parameter” issue, followed by the “New Database” and “New Screen” issues. The only Pure Risk value similar for both profiles occurred in the “Search Function” issue, indicating similar difficulties in this task for both profiles.
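This excerpt states only that Pure Risk combines Criticality and Frequency and lies in the range 0–50. A minimal sketch of how such a score could be combined, assuming (our assumption, not defined in this excerpt) a Criticality scale of 0–10 and a Frequency scale of 0–5 multiplied together:

```python
# Hedged sketch of a Pure Risk combination. The product rule and the
# 0-10 / 0-5 scales are assumptions chosen to match the stated 0-50 range;
# the paper does not give the exact formula in this excerpt.
def pure_risk(criticality, frequency):
    assert 0 <= criticality <= 10, "assumed Criticality scale 0-10"
    assert 0 <= frequency <= 5, "assumed Frequency scale 0-5"
    return criticality * frequency
```

Under this assumption, the same issue (same Criticality) receives a different Pure Risk per profile because its Frequency differs, which matches the behaviour described above.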
**DISCUSSION**
We have successfully applied our descriptive cognitive model (PRECOG) to a relevant use case, which allowed us to validate the viability of the proposed approach.
**Predictive power**
Although time-consuming, the methodology proved to correctly identify several high-criticality issues from both user profiles. Indeed, we were able to use the model to predict a relevant number of problems prior to the user studies that confirmed them. The confirmation came with a relatively high number of False Positives. This is due to the conservative nature of the model, as highlighted by the fact that results show a high Sensitivity with a comparatively lower Specificity. We opted for an exhaustive approach, and no relevant error was disregarded in the KBD-SBD mapping analysis, since such a stance allowed a more extensive list of expected problems. Despite this conservative approach, the majority of the predicted issues or root causes occurred during empirical user tests (all were observed except for the “Button Link” issue). This is a relevant output, as very few unpredicted issues occurred during the user studies, and these were mostly related to navigational (interaction with the platform) difficulties and not conceptual mismatches.
Another explanation for the high number of False Positives predicted by PRECOG concerns “No correspondence” issues, where one item from the KBD or the SBD finds no correspondent on the other side. Some issues of this type may have arisen because the participant did not mention a certain development step or minor detail, which ended up being evident when interacting with the LCDP, thus not resulting in a real issue. This is particularly evident in the case of Novice programmers, leading to a significantly lower Specificity in their case, and can be attributed to the fact that their mental models of the programming tasks differ from the platform’s model more than the Experts’ mental models do. That some of the problems were not observed is, in effect, a positive indicator of the LCDP’s ability to guide users during the learning stage. Conversely, “Over decomposition” was the type of issue with the most false positives, which seems to indicate a need to raise the abstraction level of the platform. Doing this without compromising the ability of advanced users to fine-tune more complex applications implies exploring strategies for adaptive user support (Oppermann, 1994; Gajos, Czerwinski, Tan & Weld, 2006). An important observation should be made regarding moderation of the Knowledge-Based Description stage. The moderator should be very familiar with the platform and the tasks under study in order to prompt the necessary information to complete the mapping. It is extremely important to obtain all the information needed to illustrate the participant’s mental model before interaction with the platform.
Another significant result concerns the Criticality of the issues observed. All issues which occurred had a Criticality evaluation of more than 6, that is, they were either Marginal (mistakes due to unmatched expectations, eventually solved through exploration/help) or Relevant (continuously affects the user’s understanding of the development platform and actions). This is an indicator that our approach is useful in detecting high-criticality issues. Unfortunately, and this is a limitation of the current study, no Criticality evaluation was performed for the non-observed issues, so the criticality of observed and unobserved issues cannot be compared.
Regarding the number and type of issues, the applied model allowed us to distinguish two patterns between Novice and Expert programmers. Expert programmers had more types of issues (9 in total), but each occurred less frequently, depicting a more exploratory behaviour. Novice programmers had fewer types of issues (6 in total), but each with more repetitions, meaning that the issues were effectively problematic for this profile, which explored less and whose participants had very similar performances. Another potential explanation for this difference in the number of observed issues is the speed with which each profile performed the tasks: Expert programmers managed to complete more tasks than Novice programmers, hence there was more opportunity for issues to arise. These different patterns affected the Pure Risk evaluation of the issues according to profile. Even though the Criticality was the same for all issues independently of the profile, the Frequency of the issue affected the Pure Risk value. The issues found in Novice programmers were consistent and robust, which is even more interesting considering their heterogeneous educational backgrounds. The Pure Risk evaluation should work as an order of priorities for improving the LCDP under study. Indeed, the LCDP development team has since addressed some of the issues found.
In global terms, the combined results show PRECOG is a somewhat conservative model which, despite eliciting some False Positives, also correctly identified the majority of issues that effectively impaired the participants’ progression. It should be noted that we are considering first-time users of the platform without any formal training.
As is the case for safety-critical systems, when analyzing applications such as LCDPs it is safer to overestimate the potential for use-error than to fail to identify serious use-errors that occurred. In this regard, PRECOG appears to be a promising approach in the sense that it was able to identify almost all issues faced by Expert and Novice programmers.
From the application of this LCDP Descriptive Cognitive Model, and as highlighted by the ROC analysis, we conclude that the method has predictive power (i.e., most of the identified knowledge-system conflicts resulted in use issues during the performance of the task), and that this methodology could be used as an effective tool to predict, understand, and mitigate use errors and faulty interactions in an LCDP.
Model applicability
Regarding the applicability of PRECOG, the descriptive cognitive model itself was tailored to a specific set of applications, and for the LCDP it proved useful, granular and precise. In theory, the model is not limited to LCDPs. We address applications where the user tasks can be decomposed through an HTA, so that the KBD-SBD mapping is possible. That said, we are only able to support our claims in the low-code development area, due to the performed user studies. As for the type of participants, the model assumes naïve participants or first-time users, but the observed granularity and detail of the predictions may allow its application with participants who have some knowledge of the platform under study (i.e., testing tasks that the participants never performed in that platform). This would be an interesting application for the future, as we expect the model to identify the missing knowledge independently of the proficiency of the participant.
**Threats to validity**
Having performed a user study, the results are susceptible to threats to their validity. Specifically, we acknowledge the relatively limited number of participants (a total of 20). While the achieved results regarding the percentage of effective issues vs. identified ones are positive, the impact of a larger study group should be analyzed.
Would this model work without empirical studies? We believe it would, and in the section where PRECOG is presented, we provide the necessary steps to do so. We believe its predictive capability would be improved if the criticality evaluation had been performed after the KBD-SBD mapping for all identified potential issues (including false positives). Having the criticalities of all potential issues would, for instance, allow the analysts to choose the issues evaluated with a criticality of 6 or more. These issues, according to our results, have a higher probability of being captured by PRECOG. In the future, it is our intention to validate this second approach by performing cognitive walk-throughs in the platform and performing criticality evaluations in place of the empirical studies with real participants.
**Value**
The approach used in the current study allowed us to identify problems, difficulties and issues participants faced during the interaction with the LCDP. Although not detailed in the present study, a root-cause analysis of each issue allowed us to understand that these arose mostly due to two types of lacking concepts: LCDP-related concepts and development-related concepts. Whereas Expert programmers knew the development concepts but had difficulty in translating them into the LCDP terms, Novice programmers lacked basic development-related concepts, which largely affected their performance.
Although these results are based on the study of one development use-case, and we cannot generalize to the entire LCDP, PRECOG allowed the identification, prioritization and root-cause analysis of several issues. This is valuable information for the LCDP developers, whose main goal is to place both types of profiles (Experts and Novices) within the Optimal Flow (cf. Csikszentmihalyi, 1990; Repenning & Ioannidou, 2006), with just the right amount of challenges and just the right amount of skill-acquisition, each at their own pace. The nature of the issues also provides valuable inputs to support adjusting the LCDP learning process according to the Optimal Flow. Specifically, the nature of the errors should be taken into account: Novice users, due to the lack of software development skills, will fall into anxiety, as they are not able to develop the desired features; Expert users, while lacking knowledge of the platform, will perform the tasks resorting to previously acquired knowledge, which might result in repetitive and monotonous tasks, leading to boredom.
**CONCLUSION**
Low-code development platforms have the potential to dramatically change how software is developed, making it possible, at least for particular domains, for someone without a formal education in computer science to develop quality software, and for experienced developers to significantly speed up the development process. Understanding how programmers and non-programmers approach this type of platform is key to supporting their design and evolution. By developing and applying PRECOG, a new Descriptive Cognitive Model (DCM) aimed at identifying interaction issues in the learning of low-code platforms, we were able to gain insights into potential problems with a specific low-code platform’s use. The proposed DCM was validated using empirical techniques. Twenty participants were observed interacting with the LCDP, of which 10 were expert programmers and 10 were novice programmers. All performed the same tasks and all interactions were analyzed according to the proposed model.
Although a high number of False Positives were identified after a first mapping between the user’s mental model and the system’s requirements, it is relevant to notice that all issues but one (Button link), which occurred during users’ interaction with the LCDP, were predicted by this mapping. Expert programmers had a higher number of observed issues, although each occurred less frequently. This was due to expert programmers performing the tasks more quickly and with a more explorative behaviour, giving room for more issues to occur. On the other hand, Novice programmers faced fewer issues, although each occurred more frequently. These results allowed us to successfully
identify high criticality use errors through the analysis of the users’ mental model and, importantly, the results allowed us to identify the root causes of each issue. One of the future goals of the current research is to validate PRECOG as a predictive model without resorting to user studies.
PRECOG proved quite valuable in the search for more usable LCDPs and effective EUD platforms. As Maeda (2006) points out, “observing what fails to make sense to the non-expert, and then following that trail successively to the very end of the knowledge chain is the critical path to success [i.e., in developing simple and easy to learn systems]”. Our proposed method allows the systematic and effective exploration of the conflict between users’ knowledge and system requirements/challenges, thus providing important insights for system developers that aim at creating a broadly accessible development platform. Moreover, this method can be applied in other contexts where learnability might be an issue, as it allows identifying possible sources of faulty interaction and sub-tasks where the users’ background knowledge will be insufficient to guarantee a successful performance of the task at hand.
**KEY POINTS**
● An effective Low-Code Development Platform (LCDP) requires an understanding of the distance between the LCDP end-users’ conceptualization of programming, and the actions required in the platform.
● We propose and evaluate a Descriptive Cognitive Model (DCM) for the identification of initial use issues in a low-code development platform.
● We propose three mapping rules for the identification of knowledge-system conflicts: over decomposition, under decomposition and no correspondence conflicts.
● Applying the proposed DCM we were able to predict the interaction problems felt by first time users of the LCDP.
**REFERENCES**
Carlos César Loureiro Silva is the Research and Development Coordinator of the Perception, Interaction and Usability group at CCG - Centro de Computação Gráfica. He holds an MSc in Experimental Psychology from the University of Minho (Portugal, 2011) and a PhD degree in Informatics from the University of Minho (Portugal, 2019).
Joana Catarina Fernandes Vieira is a Usability Analyst and Researcher at CCG - Centro de Computação Gráfica. She holds an MSc in Experimental Psychology from the University of Minho (Portugal, 2008) and is concluding a PhD in Ergonomics from the University of Lisbon.
José Francisco Creissac Freitas de Campos is an Auxiliary Professor at the Department of Informatics of the University of Minho and a Senior researcher at HASLab/INESC TEC. He holds a PhD. degree in Computer Science from the University of York (UK, 2001).
Rui Miguel Silva Couto is a Senior Researcher at the HASLab/INESC TEC & University of Minho. He holds a PhD. degree in Informatics from the University of Minho (Portugal, 2017).
António Manuel Nestor Ribeiro is an Auxiliary Professor at the Department of Informatics of the University of Minho and a Senior researcher at HASLab/INESC TEC. He holds a PhD. degree in Informatics from the University of Minho (Portugal, 2008).
<table>
<thead>
<tr>
<th>Title</th>
<th>Agile Practices in Use from an Innovation Assimilation Perspective: a Multiple Case Study</th>
</tr>
</thead>
<tbody>
<tr>
<td>Author(s)</td>
<td>Wang, Xiaofeng; Conboy, Kieran</td>
</tr>
<tr>
<td>Publication Date</td>
<td>2007</td>
</tr>
<tr>
<td>Item record</td>
<td><a href="http://hdl.handle.net/10379/1607">http://hdl.handle.net/10379/1607</a></td>
</tr>
</tbody>
</table>
AGILE PRACTICES IN USE FROM AN INNOVATION ASSIMILATION PERSPECTIVE: A MULTIPLE CASE STUDY
Minna Pikkarainen
Lero – The Irish Software Engineering Research Centre
Limerick, Ireland
minna.pikkarainen@lero.ie
Xiaofeng Wang
Lero – The Irish Software Engineering Research Centre
Limerick, Ireland
xiaofeng.wang@lero.ie
Kieran Conboy
National University of Ireland, Galway
Galway, Ireland
(kieran.conboy@nuigalway.ie)
Abstract
Agile methods have been adopted by many information systems development (ISD) teams and organizations in recent years. However, while agile method research is growing, many studies lack a strong theoretical and conceptual base. Innovation adoption theories provide new perspectives on analysing agile methods. This paper is based on an exploratory study of the application of innovation theory to agile practices in use, focusing in particular on the later stages of assimilation i.e. acceptance, routinization and infusion. Three case studies were conducted involving agile method projects, using semi-structured interviews. One key finding is that specific needs of the adopting teams may drive the relevant agile practices in use to a deeper level of assimilation. Another key finding indicates the period of agile use does not have a proportional effect on their assimilation stages. Therefore, one needs to be cautious when using time as a measure of agile practice assimilation.
Keywords: agile, method, systems development, practice in use, innovation adoption, assimilation stages, routinization, infusion, extreme programming, Scrum
Introduction
Agile methods represent quite a popular initiative which complements previous critiques of formalised methods (e.g. Baskerville et al., 1992), and have been well received by practitioners and academics. There is also evidence to suggest that use of agile methods has been growing rapidly since their inception (Boehm, 2002, Boehm and Turner, 2004). A number of methods are included in this family, the most notable being eXtreme Programming (XP) (Beck, 2000), Scrum (Schwaber and Beedle, 2002), the Dynamic Systems Development Method (DSDM) (Stapleton, 1997), Crystal (Cockburn, 2001), Agile Modelling (Ambler, 2002), Agile Project Management (APM) (Highsmith, 2004), Feature Driven Design (Coad and Palmer, 2002), and Lean Software Development (LSD) (Poppendieck, 2001). While the emergence of agile methods has been primarily industry and not research led, agile methods are the subject of a rapidly growing body of research activity. In addition, a large contingent of these empirical studies focus on agile methods “in action” (Fitzgerald 1997) or agile practices used in real-world contexts (Rasmusson 2003; Grenning 2001; Murru et al. 2003; Lippert et al. 2003; Fitzgerald et al. 2006). Few studies of agile methods in use, however, are based on a strong theoretical and conceptual foundation. This trend is particularly symptomatic of agile method tailoring research: many such studies exist, providing a descriptive account of how a textbook agile method was implemented and modified, but there is little cumulative tradition where each author compares and contrasts their account against other previous tailoring studies. Therefore there is significant benefit to be gleaned from analysing agile method use through the lens of existing well-established theory, innovation adoption theory being one such example.
According to Rogers (1983), an innovation can be an idea or practice which is perceived as new by adopters. Based on this definition, agile practices can certainly be characterized as software process innovations. The vast majority of the proprietary agile method literature portrays these methods as new, revolutionary and innovative. The agile manifesto is highly illustrative of this, setting out key values and principles of agile methods, and in essence drawing a firm distinguishing line between these methods and the heavyweight, bureaucratic methods that went before. Theories on IT innovation adoption, consequently, may bring new insights to the study of agile practices in use. Several innovation adoption theories have been built and used to explain the mechanisms and constructs of the introduction and implementation of new innovations (Davis 1989; Cooper and Zmud 1990; Saga and Zmud 1994; Fichman 1999; Gallivan 2001). Gallivan’s (2001) work is considered particularly relevant to this study, in which six assimilation stages have been proposed based on the previous work of Cooper and Zmud (1990) and Saga and Zmud (1994). The later three stages of assimilation have particular relevance for the study of agile practices in use, i.e., acceptance, routinization and infusion. The investigation of the adoption processes of agile practices is critical, following the line of argument of Hovorka and Larsen (2006), because it provides another level of explanation, describing the importance of the agile practices in a system development and adoption setting.
The goal of this study is to explore the application of innovation adoption theories in agile research, and by doing so to obtain a better understanding of agile practices in use. The agile practices investigated in this study are from eXtreme Programming (XP) (Beck, 2000) and Scrum (Schwaber and Beedle 2002). These methods were chosen for a number of reasons, the foremost being that they are by far the most widely used of the agile method family (Fitzgerald et al. 2006). Secondly, they are quite diverse approaches: XP is very prescriptive and practitioner-oriented, while Scrum is primarily a project management method (Abrahamsson et al. 2002). By studying these two methods, this piece of research ensures that the lessons learned consider both perspectives. Finally, existing research has shown that XP and Scrum are often combined by teams and organisations (Fitzgerald et al. 2006), and observing this phenomenon is a further motivation underpinning the choice of methods.
To achieve the goal of this study, the remainder of the paper is organized as follows. The next section describes the conceptual background, starting with a description of agile practices and a review of innovation adoption theory. An interpretation of the theories is then presented in terms of agile practices in use, followed by the framework adopted in this research. The research approach is then justified and outlined, followed by a description of the three cases. The use of the agile practices is described and the key findings are presented, which are then discussed in light of the existing literature. The last section concludes with final remarks, limitations and suggestions for future work.
Conceptual Background
Agile Practices: Methods versus Method-in-Action
There are many books, journals and articles explaining the various agile methods in existence. Table 1 lists the practices associated with XP and Scrum, the two methods which are the focus of this study. However, while such literature is often very detailed and prescriptive, there is often a substantial difference between the textbook "vanilla" version of a method and the method actually enacted in practice. Fitzgerald et al. (1997) refer to the latter as the "method-in-action". Prescribed practices are constantly tailored to suit the specific needs of teams, and human nature inevitably leads to diverse interpretations and implementations of a method. However, empirical research on agile practices used in real-world settings is relatively sparse compared with laboratory-style experiments, controlled case studies or anecdotal experience reports (for some examples of empirical studies of XP in use, see Grenning (2001), Murru et al. (2003), Rasmusson (2003), Williams et al. (2003), Drobka et al. (2004) and Svensson and Host (2005); for Scrum in use, see Rising and Janoff (2000) and Dingsoyr et al. (2006); Fitzgerald et al. (2006) study the combined use of XP and Scrum). Few studies have attempted to understand agile practices in use with the help of appropriate theoretical lenses. Consequently, a systematic and insightful understanding of agile practices in use is yet to be achieved.
<table>
<thead>
<tr>
<th>XP Practices</th>
<th>Scrum Practices</th>
<th>Explanation</th>
</tr>
</thead>
<tbody>
<tr>
<td>Pair Programming</td>
<td></td>
<td>Code is written by two programmers on the same machine.</td>
</tr>
<tr>
<td>Testing</td>
<td></td>
<td>Continually write tests, which must run flawlessly for development to proceed. Write test code before writing function code.</td>
</tr>
<tr>
<td>Metaphor</td>
<td></td>
<td>Guide all development with a simple shared story of how the system works.</td>
</tr>
<tr>
<td>Collective Ownership</td>
<td></td>
<td>Anyone can change any code anywhere in the system at any time.</td>
</tr>
<tr>
<td>Refactoring</td>
<td></td>
<td>Programmers restructure the system, without removing functionality, to improve code, performance, simplicity, and flexibility.</td>
</tr>
<tr>
<td>Coding Standards</td>
<td></td>
<td>Adherence to coding rules which will facilitate communication through code.</td>
</tr>
<tr>
<td>Simple Design</td>
<td></td>
<td>The design of the system should be as simple as possible.</td>
</tr>
<tr>
<td>Continuous Integration</td>
<td></td>
<td>Integrate and build the system every time a task is completed - this may be many times per day.</td>
</tr>
<tr>
<td>40-Hour Week</td>
<td></td>
<td>Work time is generally limited to 40 hours per week.</td>
</tr>
<tr>
<td>On-Site Customer</td>
<td></td>
<td>Include an actual user on the team, available full-time to answer questions.</td>
</tr>
<tr>
<td>Small Releases</td>
<td></td>
<td>Put a simple system into production quickly, then release new versions on a very short cycle.</td>
</tr>
<tr>
<td>Planning Game</td>
<td></td>
<td>Prioritisation of scope for next release based on a combination of business priorities and technical estimates.</td>
</tr>
<tr>
<td></td>
<td>Sprints</td>
<td>Development proceeds in short, time-boxed iterations, each producing a working increment of the product.</td>
</tr>
<tr>
<td></td>
<td>Sprint Planning</td>
<td>At the start of each sprint, backlog items are selected, detailed and estimated for that sprint.</td>
</tr>
<tr>
<td></td>
<td>Architecture</td>
<td>System architecture modification and high-level design regarding implementation of backlog items.</td>
</tr>
<tr>
<td></td>
<td>Post Game Sessions</td>
<td>Reflect on method strengths and weaknesses after each cycle.</td>
</tr>
<tr>
<td></td>
<td>Daily Meetings</td>
<td>Short daily status meeting.</td>
</tr>
</tbody>
</table>
Innovation Adoption Theory
Several innovation adoption studies represent the core theories, models and frameworks in this area, including Davis (1989), Fichman (1992, 1994, 2001) and Gallivan (2001). These studies build innovation adoption models and measure the adoption stage of innovations in various environments. While many such models exist and have much in common, they operate at different levels of analysis, such as the individual, the group and the organisation.
Davis's (1989) Technology Acceptance Model (TAM) focuses on evaluating usefulness, use and user acceptance aspects of the adoption of information technologies. A conclusion of the study is that users adopt technologies primarily based on the functions they perform, and secondarily on how hard or easy the technologies are to use. Fichman (1994) develops measurement scales for innovation adoption using the concept of assimilation stages from Meyer and Goes (1988). An assimilation stage can be defined as a description of how deeply an adopted innovation penetrates the adopting unit (company, group or individuals) (Fichman 1992, 2001). Gallivan (2001) suggests a six-stage model for describing innovation assimilation based on the work of Cooper and Zmud (1990) and Saga and Zmud (1994). The stages in the model are defined as follows.
- Initiation, a match is identified between an innovation and its application in the organization;
- Adoption, a decision to adopt an innovation is made in the organization;
- Adaptation, the innovation is developed, installed and maintained, and organizational members are trained to use the innovation;
- Acceptance, organizational members commit to using the innovation, which means that the innovation is employed in the organization;
- Routinization, usage of the innovation is encouraged as a normal activity in the organization; the innovation is no longer seen as something out of the ordinary;
- Infusion, the innovation is used in a comprehensive and sophisticated manner, which results in increased organizational effectiveness. Three different facets of infusion are described:
- Extensive use: using more features of the innovation;
- Integrated use: using the innovation to create new workflow linkages among tasks;
- Emergent use: using the innovation to perform tasks not in the pre-conceived scope.
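Since the stages above are assumed to form an ordered sequence, the model lends itself to a minimal sketch as an ordered enumeration. The following Python fragment is purely illustrative: the class and helper names are this sketch's own, not part of Gallivan's (2001) formulation.

```python
from enum import IntEnum

class AssimilationStage(IntEnum):
    """Gallivan's (2001) six assimilation stages, ordered by depth of assimilation."""
    INITIATION = 1
    ADOPTION = 2
    ADAPTATION = 3
    ACCEPTANCE = 4
    ROUTINIZATION = 5
    INFUSION = 6

def has_reached(current: AssimilationStage, target: AssimilationStage) -> bool:
    """Under the assumed sequential view of the stages, reaching a later
    stage implies having passed through every earlier one."""
    return current >= target
```

For example, under this sequential reading, a practice at the infusion stage has necessarily passed through acceptance and routinization.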
Gallivan (2001) concludes that, although the adoption of innovations always requires some degree of top-down, organizational-level decision making, it usually cannot occur without a degree of bottom-up assimilation by developers. Furthermore, Gallivan (2001) argues that assimilation stages describe how deeply an innovation penetrates the adopting unit, which can be a company, division, workgroup, or individuals.
The models discussed above, including Zmud and Apple (1992) and Gallivan (2001), are based upon what Zmud and Apple (1992) call an unfreeze-refreeze model of innovation adoption, which implies a sequencing of the stages. In terms of the latter three assimilation stages, acceptance can be seen as the end of an unfreezing phase, while routinization and infusion are increasingly refreezing phases.
The Assimilation Stages of Agile Practices in Use
This section presents the interpretation of the innovation adoption concepts in terms of agile practices in use. Rogers (1983) contends that a practice perceived as new by its adopters can be characterized as an innovation; therefore, agile practices can be defined as process innovations for an adopting ISD team. This study focuses on agile practices in use, i.e. where initiation, adoption and adaptation have already occurred. Therefore, in the context of this study the later stages of assimilation, i.e. acceptance, routinization and infusion, are more relevant. A key distinction between acceptance and routinization is whether a practice becomes a routine part of the development process, regardless of how it is used, "by the book" or in a tailored way. If a practice is seen as being routinized in a process, the three facets of infusion (i.e. extensive use, integrated use and emergent use) are used as indicators to decide whether it has reached the infusion stage. This is consistent with the key assumption mentioned above, that is, that the acceptance, routinization and infusion of agile practices happen in a sequential manner.
The three facets of infusion need to be reconsidered to suit this study. Extensive use means using more features of an innovation, which makes more sense if an agile method is investigated as a whole. Since this study focuses on individual agile practices, intensive use, rather than extensive use, seems more relevant and applicable here. The intensive use of a practice can be defined as more frequent or strengthened use of the practice than the method suggests. The definition of integrated use also needs to be adapted: the interconnected use of an agile practice with other agile practices is regarded as integrated use of that practice, rather than new workflow linkages that the practice gives rise to. The definition of emergent use, in contrast, can be taken directly without modification.
Table 2 summarizes the interpretation of the concepts of innovation assimilation in terms of agile practices in use. The concepts, together with the key assumption, enable an appropriate analysis of the assimilation stages of agile practices in use.
<table>
<thead>
<tr>
<th>Assimilation stages</th>
<th>Conceptualization in studying agile practices in use</th>
</tr>
</thead>
<tbody>
<tr>
<td>Acceptance</td>
<td>An agile practice is used only when needed, or in some special situations.</td>
</tr>
<tr>
<td>Routinization</td>
<td>An agile practice is used regularly, as a routine part of the process.</td>
</tr>
<tr>
<td>Infusion</td>
<td>An agile practice is not only routinely used, but also in a comprehensive and sophisticated way, which is indicated by:</td>
</tr>
<tr>
<td></td>
<td>- Intensive use: an agile practice is used more frequently or more intensively than suggested by the method;</td>
</tr>
<tr>
<td></td>
<td>- Integrated use: an agile practice is used in an interconnected way with other practices;</td>
</tr>
<tr>
<td></td>
<td>- Emergent use: an agile practice is used in the areas not prescribed by the method.</td>
</tr>
<tr>
<td></td>
<td>These uses tend to be routinized as well.</td>
</tr>
</tbody>
</table>
Take pair programming as an example. It is at the acceptance stage if it is used by the team only in some specific situations, not as a routine part of the development process; for instance, the team only pair on some complex tasks from time to time. If routinization is achieved, pair programming is used continuously throughout the project with little or no exception. The infusion stage of pair programming means that it is not just used continuously, but is also used in some modified manner, for example as a way to communicate and demonstrate XP practices to team members (Rasmusson 2003), or as a way to implement peer discipline within the team.
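The classification logic conceptualized in Table 2 can be summarised as a small decision procedure. The following Python sketch is an illustrative rendering of that conceptualization; the function and flag names are assumptions of this sketch, not terms from the paper's framework.

```python
def classify_assimilation_stage(used_routinely: bool,
                                intensive_use: bool = False,
                                integrated_use: bool = False,
                                emergent_use: bool = False) -> str:
    """Classify a single agile practice's assimilation stage from its observed use."""
    if not used_routinely:
        # Used only when needed, or in some special situations.
        return "acceptance"
    if intensive_use or integrated_use or emergent_use:
        # Routinized AND used in a comprehensive, sophisticated way.
        return "infusion"
    # A regular, routine part of the process, with no infusion indicators.
    return "routinization"

# Pair programming used only on complex tasks from time to time:
stage_occasional = classify_assimilation_stage(used_routinely=False)
# Used continuously, and also as a peer-discipline mechanism (emergent use):
stage_sophisticated = classify_assimilation_stage(used_routinely=True, emergent_use=True)
```

The ordering of the checks encodes the key assumption that the stages are sequential: infusion presupposes routinization, which presupposes acceptance.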
**Research Approach**
The objective of this study is to explore the application of innovation adoption theories in agile research, and to obtain a better understanding of agile practices in use through the perspective of the later assimilation stages discussed in the previous section. Since the study is exploratory in nature, with the intention of investigating a contemporary phenomenon in a real-life context, case studies are considered by the researchers to be a suitable research approach. The use of cases is also beneficial where control over behaviour is not required or possible, as research data can be collected through observation in an unmodified setting (Yin 2003). While the case study method allows the capture of detail and the analysis of many variables, it is criticized for a lack of generalizability, a critical issue for case study researchers. Because corporate, team and project characteristics are unique to each case study, comparisons and generalizations of case study results are difficult and are subject to questions of external validity (Kitchenham et al. 2002). However, Walsham (1995) argues that, when using a case study approach, researchers are not necessarily looking for generalization from a sample to a population, but rather for plausibility and logical reasoning through developing concepts and theory, drawing specific implications, and contributing rich insight.
This study uses a multiple-case design. The rationale behind a multiple-case design is that it allows a cross-case pattern search, which reassures the researcher that “the events and processes in one well-described setting are not wholly idiosyncratic. At a deeper level, the aim of multiple case study is to see processes and outcomes across many cases, to understand how they are qualified by local conditions, and thus to develop more sophisticated descriptions
and more powerful explanations” (Miles and Huberman 1994, p. 172). Researchers become more confident in a theory when similar findings emerge in different contexts and build up evidence through a family of cases.
Following these suggestions, three cases have been selected for this study, which allow a meaningful and stark comparison. Each case is based on an ISD team who have adopted a set of agile practices from XP, Scrum or both, and have used them for some time. Other relevant decisions concern the unit and level of analysis. This study investigates individual agile practices in use, not agile methods in general; thus each agile practice is considered a suitable unit of analysis. The level of analysis is the team level. The study examines how each agile practice has been used and which assimilation stage it has reached in a team as a whole, rather than concerning itself with the individual developer's reaction towards the adopted agile practices.
**Background to the Cases**
Drawing conclusions from empirical results is always difficult because the results largely depend upon project settings, especially in empirical research of agile ISD (Layman et al. 2006). The context of the ISD team plays an important role in field studies. Table 3 provides an overview of the profiles of the three cases.
<table>
<thead>
<tr>
<th>Table 3. The Profiles of the Three Cases</th>
</tr>
</thead>
<tbody>
<tr>
<td>Team</td>
</tr>
<tr>
<td>Team size</td>
</tr>
<tr>
<td>Team composition</td>
</tr>
<tr>
<td>Location</td>
</tr>
<tr>
<td>Development method</td>
</tr>
<tr>
<td>Years of experience of agile methods</td>
</tr>
<tr>
<td>Type of system developed</td>
</tr>
<tr>
<td>Company background</td>
</tr>
</tbody>
</table>
Team A is an ISD team in a multi-national company that provides service-oriented architecture solutions. XP was introduced and adopted company-wide more than four years ago. Before embarking on XP, the company already had an open culture in which developers communicate, collaborate and help each other, so XP was not viewed as a dramatically different way of developing; rather, it was seen as simply making explicit the way things should naturally be done. XP provides structure and discipline to the team. The company intended to adopt XP in full, and the team had formal training on XP, but the practices have been adopted only loosely or partially, and some have met strong resistance. Generally speaking, test-first and continuous integration are considered to be used to greatest effect, while pair programming met the strongest resistance.
Team B resides in a medium-sized company that provides IT services in the security domain. It is the first ISD team to have adopted agile methods in the company. The company's objective in adopting agile methods was to respond to new market threats significantly faster than its competitors. Before adopting agile methods, the company had based its systems development on CMMI goals and practices and therefore had a strong plan-driven culture, in which a project manager was seen as the key person responsible for a software development project. When senior management made the decision to adopt agile practices, large-scale Scrum and XP training was organized for all project managers and developers. The project manager in Team B played the role of a key change agent who brought the ideas from Scrum, such as self-organizing teams, and later the XP practices, to the team. The team members were given the possibility to select the agile practices that they wanted to use as part of their ISD process.
Team C is an ISD team in a software house specializing in network security and management systems. The team had failed to deliver its last project before embarking on a collaborative effort with an XP training laboratory. The team underwent intensive training for six months before they returned home and applied the XP practices in their ongoing activities. They successfully completed several projects with the development approach they had learnt, and felt they reached their goals of developing software "good, fast and cheap" and "working in an enjoyable way". To date, the team is the only one in the company using XP, and they see a clash between the way they work and the culture of the company.
Data Collection
Data was collected primarily through personal face-to-face interviews, which are considered a superior data gathering technique for interpretivist studies such as this (Yin, 2003). Personal interviews are also well suited to exploratory research because they allow expansive discussions which illuminate additional factors of importance (Yin, 2003; Oppenheim, 1992). Also, the information gathered is likely to be more accurate than information collected by other methods, since the interviewer can avoid inaccurate or incomplete answers by explaining the questions to the interviewee (Oppenheim, 1992).
A guiding script was prepared for use throughout the interviews to establish a structure for the direction and scope of the research, to ensure the researcher covered all aspects of the study with each respondent, to manufacture some element of distance between the interviewer and interviewee, and to permit the researcher to compare and contrast responses (McCracken, 1988). The researcher circulated the guiding questions in advance to allow interviewees to consider their responses prior to the interview. The questions were largely open-ended, allowing respondents freedom to convey their experiences and views, and expression of the socially complex contexts that underpin software development and agile method use (Yin, 2003, Oppenheim, 1992).
The interviews lasted between 50 and 120 minutes with the average being approximately 85. The interviews were conducted in a responsive (Rubin and Rubin, 2005, Wengraf, 2001), or reflexive (Trauth and O'Connor, 1991) manner, allowing the researcher to follow up on insights uncovered mid-interview, and adjust the content and schedule of the interview accordingly. Furthermore, the researcher kept a diary of questions asked during each interview, and analysed their effectiveness, making refinements and additions to the set of questions prior to the next meeting. In order to aid analysis of the data after the interviews, all were recorded with each interviewee’s consent, and were subsequently transcribed, proof-read and annotated by the researcher. In any cases of ambiguity, clarification was sought from the corresponding interviewee, either via telephone or e-mail.
Documentation review and field notes were used as complementary data collection methods. Sources of documents included ISD documents, project management documents, corporate websites and brochures, and other available publications.
Data Analysis
The unit of analysis is at the ISD team level, as opposed to the individual developer or indeed the organisation within which the team sits. The data has been analyzed in two stages. Following analysis of each case as a stand-alone entity, or what Yin (2003) refers to as "within-case analysis", the researchers engaged in cross-case comparison. Within-case analysis takes the form of a detailed write-up of each case; though descriptive, it is central to the generation of insight, the intention being to become intimately familiar with each case (Eisenhardt 1989). The focus of the data analysis is the comparison of the three cases. Cross-case comparison is intended to enable a better understanding of the different modes of use of the agile practices and the assimilation stages they have reached. The data analysis process has been guided by the concepts and their interpretations presented in the previous section, which serve as sensitizing and sense-making devices.
Coding is often used in qualitative research, systematically labelling concepts, themes and artefacts so as to be able to retrieve and examine all data units that refer to each subject across the interviews. The coding structure adopted in this research consisted of three distinct mechanisms. Firstly, an ID code was attached to each interviewee. Secondly, a classification schema was built, acting as what Miles and Huberman (1999) call a set of "intellectual bins", used to segment and filter the interview data collected. Finally, pattern coding was used in order to "identify any emergent themes, configurations or explanations" (Miles and Huberman, 1999). This approach aims to aggregate and summarise the previous codes, identifying themes and inferences across all of them.
Findings
Agile Practices in Use through an Assimilation Perspective
This section discusses how each XP and Scrum practice has been used in the three cases, analysing adoption from an assimilation perspective.
Small Releases (Scrum Sprints)
Team A's organisation has a general policy to issue a major product release every 12 months and a minor one every 6 months. The team collaborate with other teams on a product line, and all follow 6-week milestones between releases to coordinate their work. At the end of each milestone, each team is required to produce a demonstrable delivery. Within the 6-week period, the teams are not obliged to use 2-week iterations and are left to their own devices to decide how to approach the milestones. Team A keeps 2-week iterations out of project management rather than technical development concerns. There is no delivery or acceptance test at the end of the 2-week iterations, so the small releases practice is embodied at the 6-week milestone level, not at the 2-week XP iteration level.
Team B follows time-boxed monthly iterations as suggested by Scrum. A monthly sprint is considered a suitable iteration length for the type of product the team develop; shorter iterations could not accommodate the features that need to be implemented for the working prototype. At the end of each sprint, prototypes are always presented to the product owner and various groups of stakeholders. Sprint reviews are then held to collect feedback from the stakeholders, to increase product visibility and to ensure the product fulfils the demands of the customers. Compared with Team A, Team B has used the small releases practice in a more stable and routinized way.
Team C uses this practice in a more sophisticated way, with one-week iterations. The team have experimented with both longer and shorter iterations, and finally opted for a one-week length, which the team feel is the optimal pace for them. Generally the team prefer fixed-length iterations but, depending on the status of a project, they do vary iteration length. However, no user requirement change may be introduced during an iteration. The customers can check the progress of the development anytime they want, or clarify their understanding of the requirements, but they can only introduce changes at the beginning of the next iteration. At the end of each iteration, the team always deliver a piece of working software to the customers, either in a test environment or directly in the live environment. Working software is the only delivery at the end of an iteration, with no accompanying documentation. The customers test the deliverable according to the acceptance tests they write, with the help of the team, during planning games. The team guide the customers to write acceptance tests in such a way that they identify precise business scenarios from a business perspective and address only business complexity; the technical complexity is internal to the team and therefore hidden from the customers.
Planning Game (Sprint Planning)
In Team A, each iteration starts with a planning meeting between the product manager and the managers of the teams involved. Within the team, since 2-week iterations are not used as a routine, the planning game is used only when the team feel there is a need to plan, based on common-sense judgement, for example: "Should this piece of work be documented, if yes, better to have a plan for it" (engineering manager, Team A).
In contrast, Team B has a regular sprint planning meeting at the beginning of each monthly sprint, in which the requirements are defined at a more detailed level and put into a sprint backlog: "In the first few iterations I met with product owner, we prioritised the backlog…then in the sprint planning meeting we saw….whether there is something missing of the list, we reprioritised features and then took the items in the following sprint" (Scrum master, Team B). Developers do not have much influence on how the requirements are selected and analyzed. They are, however, responsible for defining tasks and providing estimates for implementing them in that sprint. Customers are partially involved in sprint planning meetings, but the product owner, who plays the role of customer proxy, is always involved: "Team, product owner and delivery team are always in sprint planning meetings." (developer, Team B)
Team C not only uses planning games routinely, but also uses different techniques to make them more effective. Each iteration starts with a planning game in which all the team members and the customers participate. User requirements are broken down and specified as user stories. The team then analyze each user story and assign estimates to it. The estimates are then compared with the capacity of the team to implement user stories in that iteration, which gives the customers and the team an idea of what user stories can be implemented in that iteration. If not all stories can be implemented, the customers prioritize the stories and choose the ones that are to be delivered at the end of the iteration. The stories are then further divided into tasks. Both tasks and stories are written on quarter-A4-sized paper cards. The team use different colours to distinguish between different types of cards: green for user stories, blue for tasks and red for spikes (exploratory tasks aimed at solving any issues emerging during the course of development). All the cards are then stuck to a whiteboard and arranged in categories. All information about user stories and acceptance tests is recorded in the project wiki, to which the customers also have access. Through planning as frequently as every iteration and using various techniques to support this activity, the attitude of the team towards planning is: "We do not plan the whole project from beginning to the end, it is useless." (developer, Team C)
**Daily Stand-up Meeting (Daily Meeting)**
Team A adopted the idea of stand-up meetings, but do not use it on a daily basis. They typically have a quick stand-up meeting at 12 o'clock on days when they feel they have not communicated sufficiently for a while.
Daily meetings in Team B are short sessions focused on answering the questions "What did I work on yesterday?", "What do I plan to work on today?" and "What is getting in my way?" Daily meetings are held regularly for disseminating information and analysing metric data on the project status: "*those meetings gave a daily view into what was going on in the project*" (Scrum master, Team B). Team B always uses metrics in the meetings for planning and project monitoring. Daily meetings are also an effective problem solving mechanism; developers feel that most problems, including design issues, are actually solved based on the discussions in daily meetings.
Team C uses this practice more intensively than described "by the book". The team has renamed the practice with the metaphoric term "steering" and uses it as a daily planning mechanism. On each working day, the team hold a brief steering session in the morning, to quickly plan what to do that day, to raise technical obstacles and to ask others for help. The session is intended to be very short. Steering can happen more than once a day: sometimes, in addition to the morning steering, the team will hold a steering session in the afternoon to plan the work of the remaining day. Steering, as the name implies, is constant adjustment of where the team goes in light of what happens around them.
**Retrospective**
In Team A only the team manager formally reflected on the development process, and not on a regular basis. It is considered unnecessary by the team, since they believe there are no issues with their process.
*(at team level do you do this kind of retrospective?) yeah we have, we are not doing it regularly... maybe it’s helpful, I mean again that’s, yeah, those are the things get lost. (Engineering manager, Team A)*
Team B holds retrospective workshops regularly at the end of each sprint to reflect on their process and to decide how to adopt other practices effectively. Retrospectives also provide a mechanism to motivate the team, for instance to motivate developers to write unit tests and to try test-driven development: "*In retrospectives we had several discussions with the team, why is the unit test numbers so low, why is the code coverage so going down instead of maintaining*" (developer, Team B). The Scrum master uses retrospective workshops to present concrete measurement data on test coverage.
Retrospectives happen even more frequently in Team C, to the extent of every day. The team name this practice "feedback", and it is the first thing they do in a working day. During a feedback session the team members reflect on the previous working day. The feedback concerns not only the development process, but also the feelings team members had, anxieties felt, what the team have achieved, and whether something went wrong. The team record the feedback in the team wiki to remember the lessons learnt. The team have even extended the practice to self-retrospectives, believing that self-reflection is a precondition for effective retrospectives on their process, and for effective communication and collaboration within the team.
**40-hour Week**
Team C is the only one that has followed the rule of the 40-hour week. Based on this rule, the team have developed a framework to integrate other practices, including daily retrospective and the daily stand-up meeting. A typical working day is eight hours, time-boxed into 15 working units (as shown in Figure 1). One unit is composed of 25 minutes of working time followed by a 5-minute break. During the day there are 2 long breaks, 15 minutes each. The team actually use a timer to help them keep this pace: the timer rings after every 25 minutes, reminding the team members to take a break.
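As a quick sanity check of the arithmetic behind Team C's day structure (a sketch, not from the paper), the units and breaks described above do add up to an eight-hour day:

```python
# Sketch (not from the paper): verify that Team C's time-boxed day
# structure adds up to an eight-hour working day.
WORK_MIN = 25         # one working unit: 25 minutes of work...
SHORT_BREAK = 5       # ...followed by a 5-minute break
UNITS = 15            # 15 working units per day
LONG_BREAKS = 2 * 15  # two long breaks of 15 minutes each

total_minutes = UNITS * (WORK_MIN + SHORT_BREAK) + LONG_BREAKS
print(total_minutes)       # 480
print(total_minutes / 60)  # 8.0 hours
```

This is essentially the "pomodoro"-style rhythm, with the breaks counted inside the eight hours.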
Team B does not follow the 40-hour week rule, but the team has applied the idea of time-boxed iterations: “iterations in the team are time boxed and follows the same rule every time” (Developer, Team B). An iteration always starts with a half-day sprint planning meeting, continues with daily meetings, and ends with a half-day iteration retrospective and a half-day sprint review meeting.
**On-site Customer**
On-site customer has not been fully implemented in any of the three teams. Team A and Team B use internal resources as customer proxies instead of real customers, and even the customer proxies are not always on-site. In Team A, product managers of the company play this role. Although the team try to communicate with them whenever possible, they are not always available to the team. For Team B, the product owner and the Scrum master communicate daily with the customers. The real customers sometimes participate in sprint planning and review, but not regularly: “involvement of customer, even through conference calls, through onside visits, would be beneficial” (Developer, Team B). In contrast, Team C manage to involve their customers in every planning meeting and in weekly acceptance tests, and communicate frequently with them within the weekly iteration, to achieve the same effect that an on-site customer would bring them.
**Pair Programming**
Team A does not use pair programming on coding tasks, but rather primarily as a troubleshooting tool. Pair estimation of user stories has been used, sometimes successfully, sometimes not. Team B pairs on code review and new staff training: “...the only real way to review code is actually pair programming”. Neither Team A nor Team B treats pair programming as a routine practice in their development process. In Team C, instead, developers always program in pairs on real code. They share one desktop, one using the keyboard (as the driver), the other using the mouse (as the navigator). Role switching happens when the navigator has a new idea: he can take the keyboard from the driver and start to write code, which is a preferred way of communicating ideas rather than discussion. Pairing is self-arranged and is not fixed, and pair rotation happens frequently. Of the two developers, generally the one who is responsible for the task stays, while the other goes to pair with a different developer.
**Testing**
This practice has been used by all three teams, though Team B only adopted it partially, at a later stage of the project. Team B felt that testing activities were too complex and time consuming, and not as primary as creating new features rapidly. As a consequence, the practice was not used regularly at the beginning of the project: “we weren’t use test driven development, so we wrote the component first and then the unit test… later on, when we were quite more disciplined then we started writing the tests first and then the component” (Developer, Team B). In comparison, both Team A and Team C use testing coherently and consistently, especially test first; it is a part of their development routine. The two teams always write unit test code before writing function code. In addition, Team A has developed a comprehensive testing framework that combines test first and continuous integration, as shown in Figure 2. Team C, instead, discovered an emergent use of test first: they use it to learn third-party software. To understand how a piece of software works, rather than reading documentation, they write tests and run them against it to understand its functionality.
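Team C's emergent use of test first, writing tests against third-party code to discover its behaviour, is sometimes called a "learning test". A minimal sketch (illustrative, not from the paper; the library and expectations are our own choice) probing Python's standard `urllib.parse`:

```python
import unittest
from urllib.parse import urlparse

# Sketch of a "learning test": instead of reading documentation,
# encode your current understanding of a third-party API as
# assertions, run them, and refine the tests until they pass.
class LearnUrlparse(unittest.TestCase):
    def test_splits_scheme_and_host(self):
        parts = urlparse("https://example.com/path?q=1")
        self.assertEqual(parts.scheme, "https")
        self.assertEqual(parts.netloc, "example.com")

    def test_missing_scheme_goes_to_path(self):
        # Hypothesis to check: without "//", everything lands in .path
        parts = urlparse("example.com/path")
        self.assertEqual(parts.netloc, "")
        self.assertEqual(parts.path, "example.com/path")

if __name__ == "__main__":
    unittest.main()
```

The value is in the failing runs: each wrong hypothesis corrected is something learned about the library, and the passing suite remains as executable documentation.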
**Continuous Integration**
Continuous integration has been implemented consistently in Team C. In Team B, setting up the continuous integration framework was found difficult and time consuming: “around in the middle of the project we had stable build environment” (Developer, Team B). When the problems with the build environment were solved, Team B started to use continuous integration hourly. Team C continually integrate the well-refactored function code of different stories. Team A, as mentioned in the testing practice section, have a more comprehensive test-oriented development framework which has also been adopted company-wide in a consistent way (see Figure 2). The framework integrates test first and continuous integration to guarantee the high quality of the code.
**Collective Ownership**
In all three teams collective ownership has been endorsed by the team members and regarded as a natural result of team collaboration. Team A has achieved it on the code, as the practice is intended, but it goes beyond the scope of code in Team B and Team C, which share other forms of working results among their members as well. For example, both teams have team email addresses which they use to communicate with their customers, and any achievements are shared among team members. Another example from Team C is that when the team members respond to a customer’s request, they contact and talk to the customer all together. No one claims ownership of the solution; they collectively own it.
**Simple Design**
Simple design is not evident in Team A and Team B, but is highly embraced by Team C. Team C believes that design emerges from code; therefore simple design is embodied by simple code. Writing simple code is always what the team strive to achieve. Being simple is not only important in coding and designing; the team generalize the value behind the practice and apply it to communication. They believe that they have to be simple in order to communicate effectively with each other. The team’s understanding of simple design goes beyond the system:
*Good practices are to write simple stories, keep the system simple and your code simple by refactoring, and also studying new technologies that can help you to make things simple in a quick manner... learning also is a (way to) increase your skill and so is useful for managing complexity. I talked about simplicity, but all the XP values deal with complexity in some way.* (Developer, Team C)
**Refactoring**
Team A and Team B do not refactor every piece of code; they do so only if there is a good reason for it, striving for working code that is useful, not perfect. Team C does not refactor every piece of code either; they do so when there are duplicated behaviours in the system. Compared with Team A, Team C does refactoring in a relatively more consistent and regular way, refactoring both test code and function code.
**Coding Standards**
All the teams conform to a set of coding standards that they had even before they applied agile methods. It is a well-established routine, but no evidence of a mode of use indicating infusion has been found in any of the three teams.
**The Factors in Agile Practice Assimilation**
This section presents the findings regarding the factors in the assimilation of agile practices. Several patterns have emerged through the comparative analysis of the agile practices used in the three cases, combined with their organizational contexts. To better illustrate these patterns, Table 4 re-organizes the agile practices used in the three cases along the dimension of the three later assimilation stages.
<table>
<thead>
<tr>
<th>Assimilation stage</th>
<th>Team A</th>
<th>Team B</th>
<th>Team C</th>
</tr>
</thead>
<tbody>
<tr>
<td>Acceptance stage</td>
<td>- Small releases<br>- Planning game<br>- On-site customer<br>- Retrospective<br>- Pair programming<br>- Refactoring</td>
<td>- On-site customer<br>- Pair programming<br>- Testing<br>- Refactoring<br>- Continuous integration</td>
<td></td>
</tr>
<tr>
<td>Routinization stage</td>
<td>- Collective ownership<br>- Coding standards</td>
<td>- Scrum sprints<br>- Scrum planning<br>- Retrospective<br>- Daily meeting<br>- Coding standards</td>
<td>- On-site customer<br>- Continuous integration<br>- Refactoring<br>- Coding standards</td>
</tr>
<tr>
<td>Infusion stage</td>
<td>- Testing<br>- Continuous integration</td>
<td>- Collective ownership</td>
<td>- Small releases<br>- Planning game<br>- Daily stand-up meeting<br>- 40-hour week<br>- Retrospective<br>- Collective ownership<br>- Pair programming<br>- Testing</td>
</tr>
</tbody>
</table>
*Table 4. A Summary of the Agile Practices at Different Assimilation Stages in the Three Cases*
Agile practices in use do not reach the same assimilation stage simultaneously, even if they are adopted at the same time.
Team A started to use all the practices listed in Table 4 at the beginning of their XP adoption, but only testing and continuous integration have reached the infusion stage; some practices are routinized, while many others remain at the acceptance stage after more than 4 years of use. Team C is in a similar situation, though at a more advanced stage overall: all the adopted agile practices have been routinely used, but not all have reached the infusion stage. Team B differs from the other two cases in that the team did not adopt all the practices together; they adopted different practices at different stages of the project. However, even practices that were adopted at the same time, for example Scrum sprint planning and pair programming, are at different assimilation stages.
The period of use of agile practices does not have a proportional effect on their assimilation stages.
This finding overlaps, to a certain extent, with the previous one. However, it is more focused on a cross-case comparison of the assimilation stages of the same agile practice, rather than on a within-case comparison of the assimilation stages of different practices. Team A and Team C have both adopted agile practices for 4 to 5 years. However, as shown in Table 4, many practices remain at the acceptance stage in Team A while in Team C they have reached the infusion stage, for example, small releases and retrospective. In contrast, small releases and retrospective in Team B have reached routinization after only one and a half years of use. It may be argued, based on this analysis, that the duration of use does not have a proportional effect on the assimilation stage that agile practices reach.
The agile practices addressing the specific needs of the adopting team reach deeper assimilation levels.
In Team A, agile practices addressing management issues remain at the acceptance stage while development practices, such as testing and continuous integration, have reached the infusion stage within the same timeframe. One possible explanation can be found in Team A's need for high quality, which is a consequence of the software product line approach the company adopts. A problem with a product line is that defects or bugs in one component can be repeated in all other components and products in the product line. This characteristic imposes a need for high quality. Quality-related practices, such as test first and continuous integration, can address this need. In contrast, the company of Team A has always had an open culture. Developers communicate and collaborate freely, and mutual help goes on amongst them. There is no need to emphasize these aspects through adopting the relevant agile practices. As a result, such practices, for example daily stand-up meetings, do not reach the infusion stage even though they have been used for more than 4 years.
This finding is further supported by Team B, in which the management practices of Scrum achieved routinization fairly quickly, in only one and a half years of use, because the team regarded these practices as a solution for communication and change request management between the team and their customers.
Regular retrospective may drive agile practices to later assimilation stages.
Regular retrospectives happen in Team B and Team C, where most practices have reached either the routinization or the infusion stage. In Team A, where most practices remain at the acceptance level, retrospectives do not happen regularly. Since the purpose of a retrospective is to review and improve the use of the adopted agile practices, retrospective may be one of the key mechanisms driving agile practices to a deeper level of assimilation.
**Discussion**
Agile practices have been used in different modes and have reached different stages of assimilation in the three cases presented in this paper. The emergent patterns reveal the factors in the assimilation of agile practices. The implications of the findings are discussed in this section.
One implication can be drawn from the finding that the duration of usage does not affect the assimilation stage of an agile practice. It resonates with the argument of Fichman (1999). Innovation adoption studies are concerned with understanding the deployment of an innovation by following its spread across a population of potential adopters over time (Fichman 1999). However, Fichman (1999) reminds us that for some technologies it may be inappropriate to assume that in most organizations the later assimilation stages will automatically follow the earlier ones. The finding
of this study indicates that time may not be an appropriate indicator in the evaluation of the assimilation stages of agile practices. This is an important aspect to take into consideration for studies that intend to understand how agile practices have been routinized and infused in a team, or even at company level. Time is an important dimension in this kind of study, but one needs to be cautious when using it to measure the levels to which agile practices penetrate adopting units.
Another finding suggests that strong needs may lead the relevant practices to comprehensive or sophisticated use and a deeper level of penetration in adopting organizations. This finding provides insight into how to adopt agile practices effectively. Since the agile practices addressing the needs of an adopting organization have the potential to be routinized or infused, it is sensible to identify the areas that an organization needs to improve, and then select relevant agile practices to adopt, rather than taking the whole set of practices in an agile method. Therefore, this study supports the adoption approaches presented in Svensson and Host (2005), Pikkarainen and Passoja (2005), and Pikkarainen and Mäntyniemi (2006). These approaches first identify the organizational challenges, then map them to agile-based solutions (i.e., practices), to understand which agile practices should be introduced and why, before the actual deployment of agile practices. For example, Svensson and Host (2005) present experiences of introducing agile practices in a software maintenance and evolution organization. Their study is focused on analyzing which agile practices are easiest to introduce in a development project. It reports a similar finding: agile practices should be introduced in the areas where they are currently seen as most beneficial. For example, code-related practices were introduced because of the phase in which the organization was maintaining and supporting several systems. In the study of Pikkarainen and Mäntyniemi (2006), agile adoption starts with identifying the areas of an ISD process that need improvement through a CMMI goal-mediated framework.
The innovation adoption theories used in this study presume that infusion of agile practices cannot happen before they are routinely used and become a part of normal organizational life. This is in agreement with Zmud and Apple (1992), who show empirical evidence to support the argument that an organization cannot achieve high-level infusion without having high-level routinization first. Zmud and Apple (1992) suggest that routinization and infusion capture important aspects of innovative organizational behaviour. Teams using agile practices as a routine part of their development work may be in a good position to discover innovative ways of using them, and thus have the potential to be innovative. On the other hand, this study also finds that some level of integrated or emergent use of agile practices can also occur in the acceptance stage of assimilation. For example, pair design is an emergent use of pair programming, but it is not used regularly in either Team A or Team B. Can integrated or emergent use at the acceptance stage be viewed as innovative behaviour even though it is yet to be routinized? This question remains to be explored.
An examination of the modes of use of retrospective in the three cases gives rise to the assumption that there might be a link between the frequency of retrospectives and the assimilation stages that agile practices can reach. The effects of iteration retrospectives on development teams have been studied by Salo and Abrahamsson (2006), who show that teams are able to make continuous, small but effective improvements through iteration reflective workshops. Meanwhile, the reflective mode of thinking may improve developers’ understanding of their own processes (Hazzan and Tomayko 2003), which may in turn lead agile practices to deeper levels of penetration in the adopting organizations.
Finally, in this study integrated use of agile practices is re-defined as interconnected use and is taken as an indicator of the infusion stage. Many studies present examples of interconnected use of agile practices (Rasmussen 2003; Vanderburg 2005; Fitzgerald et al. 2006; Layman et al. 2006). Rasmussen (2003) discovers that introducing the test-driven development technique depends on team learning, which happens during pair programming. That study showed that agile practices are not isolated or independent, and that they can be used in an integrated way. Layman et al. (2006) find that pair programming can be used as a framework to integrate other agile practices. These studies show that interconnected use of agile practices may provide new schemes for ISD.
**Conclusion and future work**
Many studies have investigated adoption and application of agile practices in real-world contexts, but few have attempted to understand routinization and infusion of agile practices. This paper contributes to the literature by making a first attempt to conceptualize the later assimilation stages of agile practices and to explore agile practices in use through an innovation assimilation perspective. Three cases have been presented and compared to evaluate the assimilation stages of the agile practices they have used. Among the key findings are that time may not be an
appropriate indicator of the assimilation stages of agile practices; instead, the needs of an adopting unit may be a driving force. Agile practices addressing the needs of a software development team have the potential to reach later stages of assimilation. The methodological contribution of this study is that it shows a possible way to apply concepts from innovation adoption theories in evaluating the use of agile practices in systems development.
There are obvious limitations to any case study research, including issues regarding the generalisation of findings from the data collected. Another specific methodological limitation of this study is that, since only snapshots of agile practices in use have been taken and analyzed, it is not possible to understand the factors in the assimilation of agile practices from a process perspective. A research question associated with the process perspective is how different agile practices reach assimilation stages. The concepts from innovation adoption theories and their interpretation need further evaluation and extension to make them more adequate for the study of agile practices. One possible avenue for further research is to examine agile method practices beyond those covered in this study (i.e., XP and Scrum). Methods such as Lean Software Development, Feature Driven Development, Agile Project Management, Crystal and Adaptive Software Development could all be assessed. Secondly, a more quantitative approach could be adopted by means of a large-scale survey. This could be used to determine the levels of agile method and agile practice assimilation across the ISD community, with results that are more generalisable than those contained in this research. It could reveal interesting insights, such as which agile methods are most assimilated and why. Such a study could also examine the barriers and facilitators affecting this assimilation. Further research could also examine the effectiveness of agile method adoption. This study was descriptive in nature, with the objective of understanding the extent of assimilation; there was no attempt to correlate assimilation with effectiveness or success.
**Acknowledgements**
This research was conducted within the ITEA Flexi project, in which Lero, the Irish Software Engineering Research Centre, is one of the research partners, funded by Enterprise Ireland. Part of the work was done at VTT, Technical Research Centre of Finland, funded by the National Technology Agency of Finland (TEKES) and the Nokia Foundation in Finland. A special acknowledgement to Brian Fitzgerald for his valuable advice on the research and the paper.
Approaches to Information Systems Development
COLLABORATION AND CONFLICT IN SOFTWARE REVIEW MEETINGS
GIOVANA B. R. LINHARES
Graduate Program in Informatics (PPGI), Federal University of Rio de Janeiro,
Rio de Janeiro, RJ 20010-974, Brazil
giovana_linhares@hotmail.com
MARCOS R. S. BORGES
Graduate Program in Informatics (PPGI), Federal University of Rio de Janeiro,
Rio de Janeiro, RJ 20010-974, Brazil
mborges@nce.ufrj.br
PEDRO ANTUNES
Faculty of Sciences, University of Lisbon, Campo Grande,
Lisbon, 1749-016, Portugal
paa@di.fc.ul.pt
Received (Day Month Year)
Revised (Day Month Year)
Communicated by (xxxxxxxx)
This paper discusses the collaboration-conflict process: a binomial process mixing collaboration and conflict. We applied the collaboration-conflict process to Software Review Meetings, commonly adopted to verify the functional specification of software. We developed a groupware tool demonstrating the dynamics of the collaboration-conflict process in review meetings. We also provide results from an experiment with the tool in a software engineering firm. The results show that the collaboration-conflict process promotes argumentation and generates better reviews.
Keywords: Review Meetings, Negotiation, Collaboration-conflict Process.
1. Introduction
Software Review Meetings (SRM) are recommended quality assurance activities in software engineering.¹ SRM involve designers, developers and testers in the verification of software at various points in the product development lifecycle. They allow determining whether a product is being developed with quality and in consistency with the specifications, i.e., whether it supplies the right solution to the requirements specified by the client.
In spite of common corporate goals, the participants in SRM often develop conflicting perspectives, interpretations and positions regarding the product quality. This type of conflict justifies the collaboration-conflict process: a process integrating conflict management in collaboration. We thus have, on the one hand, the review activity that has to be fulfilled by a group of persons and, on the other hand, the collaboration-conflict process necessary to accomplish the review activity with success.
Groupware may simultaneously support SRM and collaboration-conflict processes. Unfortunately, resolving conflicts and getting to consensus is a complex problem. One major intricacy is dealing with the main assumptions behind conflict resolution: (1) the interlocutors have diverse profiles, interests, viewpoints, and strategies that should be respected and often promoted to reach high-quality results; (2) in this context, reaching consensus requires a collective cognitive effort to understand the different positions and negotiate acceptable solutions; and (3) the process should be simultaneously fast and thorough, two goals that are often difficult to reconcile.
Many groupware systems emphasize collaboration to the detriment of conflict management, for instance adopting a strict focus on participation and shared information. Such an approach may however fail, either because conflicts may remain dormant, just to arise later; or they may escalate to unacceptable levels, making it more difficult if not impossible to accomplish the corporate goals without explicit negotiation. It is therefore necessary to balance collaboration and conflict.
The problem discussed in this paper concerns the lack of collaboration-conflict balance observed in the current groupware tools. Our research tries to supplant this lack of balance by integrating models of collaboration and conflict. This research guided the development of a groupware tool supporting SRM in the Functional Specification phase.
The adopted research approach is based on the Design Science paradigm. This problem-solving paradigm has its roots in engineering. It seeks to understand how technology may contribute to solving specific problems in particular domains. The Design Science paradigm emphasizes two main research goals: (1) establishing relevance through the identification of requirements and field-testing of concrete solutions, which in our case is accomplished by the FTR tool; and (2) establishing rigor by grounding the technology development in solid conceptual foundations, which in our case concerns the collaboration-conflict model.
The paper is organized in seven sections. In Section 2 we discuss the research’s theoretical foundations. In Section 3 we describe the collaboration-conflict process. Section 4 describes the developed prototype. Section 5 describes an experiment carried out with the prototype. Finally, Sections 6 and 7 present some points for discussion and the conclusions from this research.
2. Theoretical Foundations
2.1. **Behavioral foundations of software review meetings**
Many recent software development approaches emphasize participation and collaboration as critical to improve performance. Examples include agile\(^4\) and open source\(^5\) software development. SRM follow the same assumptions, relying on collaboration to improve the early detection and correction of defects in software development.\(^6\)
SRM involve groups of experts, following formal procedures and designated roles, in the discovery of discrepancies between software specifications and other software documents, standards, and best practices.\(^7\) Johnson\(^8\) found that these discrepancies can be one or two orders of magnitude less costly to remove when found in early development stages than after being released to customers; he also realized that SRM are effective in discovering certain soft, but nevertheless costly, defects such as logically correct but poorly structured code.
In accordance with the International Software Testing Qualifications Board, the roles and responsibilities involved in SRM include:\(^6\)
- Manager: Has responsibility for the final decisions;
- Moderator: Is responsible for the success of the review meeting. Leads the meeting and balances the discussions. Whenever necessary, also arbitrates conflicts;
- Authors: Submit software artifacts for review and explain and justify their decisions;
- Reviewers: Identify, analyze and question the defects found in the artifacts under review;
- Secretary: Documents what happens in the review meeting, registering the defects and final decisions.
D’Astous et al\(^1\) conducted observational studies to identify and characterize the predominant configuration of exchanges associated with SRM: 1) solution-elaboration; 2) solution-evaluation; 3) solution-evaluation-elaboration; 4) proposition-opinion; and 5) opinion-arguments. This indicates that both conflict (negative evaluation) and collaboration (elaboration of an alternative solution) play an important role in SRM.
2.2. **Collaboration-conflict model**
Abstracting from the patterns observed by D’Astous et al, we may define a *behavioral model* distinguishing two very distinct behaviors:
- **Conflictive**: When reviewers assert negative evaluations of solutions proposed by the authors, or when authors and evaluators provide negative opinions regarding the others’ propositions.
- **Collaborative**: When reviewers seek to compensate negative evaluations by elaborating upon solutions or proposing new solutions. When authors and evaluators provide positive opinions regarding the others’ propositions.
These behaviors define the *collaboration-conflict spectrum* of the exchanges between reviewers and authors in SRM. Of course, participants may continuously shift from one behavior to the other along the review process. What should be noted, though, is that depending on the contingencies of the specific situation, either behavior may be supportive of, or harmful to, the quality of the SRM outcomes.
For instance, excessive collaboration may lead to groupthink, which has been considered detrimental to the decision quality. Also, extremely conflictive behaviors may lead to unsuccessful review meetings. Interestingly, dealing with conflict has been considered a way to avoid groupthink and collaboration is also a viable way to overcome conflict. Thus the two behaviors may actually be necessary to improve the SRM quality.
Our model is based on the assumption that (1) review meetings should not gravitate towards being strictly collaborative or strictly conflictive but instead should reflect the whole spectrum of behaviors. The model also considers that (2) both collaboration and conflict should be stimulated in particular circumstances, since they are necessary to counterbalance the negative effects of each other.
Fricker and Grünbacher distinguish between single-party groups, which are highly cohesive and thus pursue the same goals, and multiple-party groups, which appear on different sides of the table. Multiple-party groups may be further classified as differentiated, homogeneous and collaborating: differentiated groups compete with each other, homogeneous groups have the same aspirations but different opinions, and collaborating groups seek an agreement that may be beneficial to all group members. Thus in our behavioral model we should also consider that (3) collaboration and conflict may emerge at different grades, from cohesive to differentiated, homogeneous and collaborating. Research in conflict resolution has found that the adopted strategies depend on various factors such as personal style, gender, organizational influences and culture. This reinforces the argument that collaboration and conflict should coexist in SRM, and that no particular predisposition favoring one over the other should be adopted.
2.3. Computational support to the collaboration-conflict model
Thomas considers that beyond behavioral predispositions, cultural factors and social pressures, the adoption of collaborative and conflictive behaviors may be influenced by rules, procedures and incentive structures. We contend that, by controlling these elements, technology may explicitly influence human behavior in SRM. We may distinguish the following types of influence:
1. Using technology to manage the process;
2. Using technology to intervene in the process as a facilitator or mediator; and
3. Using technology to develop incentive mechanisms that promote the process quality.
Exemplifying the first type, we find Online Dispute Resolution systems, which manage the definition of goals, preferences, offers and counteroffers, and settlements. In the second type we find intelligent mediation tools. They employ automatic or semi-automatic mechanisms to monitor activity, identify problems with participation, and assist their resolution through human interventions and information management mechanisms.
The WinWin negotiation model for requirements inspection has applied intelligent mediation to SRM, offering mechanisms to detect software defects like missing capabilities and hidden requirements, and promoting agreements using brainstorming, categorizing and polling tools.
The third type addresses the collaboration-conflict model in more subtle and diverse ways than the previous ones, using technology to influence the participants’ behaviors but without explicit control. Within this category we may find several technology-designed mechanisms:
- Providing awareness of conflicts.
- Supporting conflict detection and traceability.
- Visualizing preferences and settlement spaces.
- Promoting knowledge exchange and alternative problem/solution representations.\textsuperscript{28}
- Promoting certain positive values such as anonymity, constructive criticism, participation and consensus.\textsuperscript{29-31}
- Detecting and discouraging certain malicious acts.\textsuperscript{32}
Some of these incentives have been used with success in crowdsourcing systems like Wikipedia and open source software development.\textsuperscript{33,34} For instance, Wikipedia offers talk pages and controversial tags to facilitate conflict resolution.\textsuperscript{33} Also in the software engineering field, Ramires et al\textsuperscript{30} experimented with several mechanisms to promote consensus in software requirements validation, supporting multiple individual preferences, consensus solutions, and also rating users according to their conflictive or collaborative behaviors.
3. The Collaboration-Conflict Process
In this section we elaborate the collaboration-conflict process, which provides a particular implementation of the model discussed in the previous section. This implementation is necessary to evaluate the model assumptions.
We conceptualize the collaboration-conflict process as a combination of three functions: (1) review, (2) negotiation, and (3) argumentation. Let us now elaborate these functions in detail. Words in bold call attention to key concepts.
**Review.** The review meeting may be characterized according to the following phases:
- **Review statement:** What triggers the meeting, consisting of a list of review items such as specification documents and programming code.
- **Scores:** In this phase, the participants give scores to the review items. We currently support three scores: accept, reject and accept with restrictions. This phase may involve negotiation (described below).
- **Decision:** After assessing the various review items, a decision must be made about the review. This final phase involves analyzing the scores given to each review item, equating their impact on the overall review and determining if the review fails or succeeds.
**Negotiation.** The negotiation phase is prompted by conflicts. There is a conflict when two or more reviewers give different scores to a review item. A conflict may only be resolved through negotiation. Multiple negotiations may occur during a review. A negotiation evolves according to the following steps:
- **Proposals**: In the first step, the different scores given to a review item are treated as negotiation proposals submitted to the participants.
- **Search for consensus**: The participants have to reach a **final score** for the item. This step may require argumentation (described below).
- **Closure**: A negotiation is closed when a final score is defined for a review item.
**Argumentation.** The search for a final score may require the confrontation of arguments. We adopted an argumentation model based on the Issue Based Information System model, which has been used in software engineering to capture design rationale. The argumentation model defines the following elements:
- **Positions**: Several positions are expressed in favor or against a proposal. The positions are automatically inferred from the scores attributed by the participants to the review items (for instance, the participants that gave a reject score are against the participants that gave an accept score and vice versa).
- **Arguments**: Concise pieces of text giving strength to positions. In order to enforce argumentation, the participants are requested to complement the reject and “accept with restrictions” scores with arguments.
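As a concrete illustration, the automatic inference of positions from scores could be sketched as follows. This is a minimal sketch: the function name, the score labels, and the grouping of “accept with restrictions” with the against-positions are our assumptions for illustration, not the FTR Tool’s actual implementation.

```python
# Minimal sketch of the automatic inference of positions from scores.
# Names and the handling of "accept with restrictions" are illustrative
# assumptions, not the FTR Tool's actual implementation.

def infer_positions(scores):
    """Map each participant to 'in favor' or 'against' a proposal.

    scores: dict of participant -> score, where a score is one of
    'accept', 'accept with restrictions' or 'reject'. Participants
    who accept stand against those who reject, and vice versa; here
    'accept with restrictions' is grouped with 'against' because,
    like a reject, it must be complemented with arguments.
    """
    return {who: ("in favor" if score == "accept" else "against")
            for who, score in scores.items()}

scores = {"ana": "accept", "bruno": "reject",
          "carla": "accept with restrictions"}
print(infer_positions(scores))
# {'ana': 'in favor', 'bruno': 'against', 'carla': 'against'}
```

In a real deployment the inference would of course operate over the tool’s stored score records rather than an in-memory dictionary.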
Notice that the review, negotiation and argumentation activities are entangled and concurrently executed. The data model of the collaboration-conflict process is organized around the various elements identified above: review statement, review item, score, final score, proposal, position, argument and decision. Figure 1 depicts this model.
We also observe that the goal of the collaboration-conflict process is not necessarily to obtain consensual scores for every review item. Several rules may be defined regarding what results should be drawn from the individual scores. The following rules may be considered: majority voting, where the final score is determined by the majority of the participants; consensus voting, where there is a result only if all participants selected the same score; and manager decision, where the manager decides the final score based on the participants’ scores.
After obtaining the final scores, the whole review statement should be subject to a final decision. Again, several rules may be adopted to reach the final decision. We adopted the following types of decisions in our implementation: (1) full acceptance, when all participants accept; (2) general reject, if there is at least one reject; (3) postpone, if there is more than a predefined number of accepts with restrictions; and (4) general acceptance otherwise.
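The voting rules and decision types above can be made concrete with a short sketch. The function names, signatures, and the postpone threshold default are our illustrative assumptions; only the rule semantics come from the text.

```python
# Illustrative sketch of the final-score rules (majority voting,
# consensus voting, manager decision) and of the four decision types
# described above. Names and signatures are assumptions, not the
# FTR Tool's actual API.
from collections import Counter

def final_score(scores, rule, manager_score=None):
    """Derive the final score of one review item from individual scores."""
    if rule == "majority":
        return Counter(scores).most_common(1)[0][0]
    if rule == "consensus":
        # There is a result only if all participants chose the same score.
        return scores[0] if len(set(scores)) == 1 else None
    if rule == "manager":
        return manager_score
    raise ValueError(f"unknown rule: {rule}")

def review_decision(final_scores, postpone_threshold=2):
    """Apply the four decision types to the final scores of all items."""
    if all(s == "accept" for s in final_scores):
        return "full acceptance"
    if any(s == "reject" for s in final_scores):
        return "general reject"
    if sum(s == "accept with restrictions" for s in final_scores) > postpone_threshold:
        return "postpone"
    return "general acceptance"

print(final_score(["accept", "reject", "accept"], "majority"))  # accept
print(review_decision(["accept", "accept with restrictions"]))  # general acceptance
```

Note that with majority voting a tie is broken arbitrarily here (by first occurrence); a real implementation would need an explicit tie-breaking policy, for instance escalation to the manager.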
3.1. **Factors affecting the process**
Every collaboration-conflict process, although structured according to the phases previously described, has its own dynamics and depends on a set of factors that interact with each other, interfering with the process outcomes. We highlight the following contextual factors:
- **Level of conflict** - As the level of conflict increases, so does the cognitive effort necessary to negotiate and argue. At the limit, a destructive level of conflict will lead to a failed process. The number of
suggested proposals, positions and arguments may serve to measure the level of conflict.
- **Number of participants** - A large number of participants may also make negotiation more difficult.
- **Status differences** - Status differences address the dependence relationships between leaders and subordinates. Groups having status differences may be negatively affected by the dependence on people with more power. The balance between the participants’ proposals, positions and arguments may serve to measure the effects of status differences.
- **Problem involvement** - A low involvement with what is under discussion may make it more difficult to participate in the process. The number of suggested proposals, positions and arguments may serve to measure the problem involvement.
- **Group expertise** - The lack of expertise about the problem under discussion may also affect the process outcomes. This factor may be measured by assessing the quality of the presented arguments.
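Taking the counts mentioned above as proxies, a crude level-of-conflict indicator could be computed as follows. This is a hypothetical metric for illustration only; the text suggests these counts as measures but prescribes no particular formula.

```python
# Hypothetical proxy for the level of conflict: the combined volume of
# proposals, opposing positions and arguments, normalized per participant.
# The formula is an assumption; the text only names the counts as measures.

def conflict_level(n_proposals, n_positions_against, n_arguments,
                   n_participants):
    return (n_proposals + n_positions_against + n_arguments) / n_participants

print(conflict_level(4, 2, 6, 4))  # 3.0
```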
### 3.2. Quality criteria for assessing the process
It is fundamental to define quality criteria for assessing the collaboration-conflict process. However, the selection of criteria is quite challenging. Let us consider, for example, a situation where a decision is immediately reached after a small number of proposals; and contrast it with another situation in which, after a long argumentation, several proposals were discussed.
We may assume the first case has low quality while the second case has high quality. This assumption may however be misleading. For instance, it is possible that the first case has low complexity and relevance, and the adopted decision is not only adequate but also efficient. Conversely, the second case may correspond to a situation where conflicts may have led to a suboptimal decision, with the additional cost of spending too much time to finish the process.
When considering negotiation processes, quality has been fundamentally associated with efficiency. For instance, the distance between the agreed solution and the best possible solution that could be obtained by continuing the process, designated value-left-on-the-table, is commonly used to evaluate the quality of negotiation processes. This approach is however more adequate to bargaining than to collaboration-conflict, since the former is influenced by the zero-sum game while the latter is more influenced by “satisficing” trade-offs.41
When considering collaboration processes, quality tends to be measured according to a diverse set of variables categorized as efficiency, effectiveness, satisfaction, and consensus.42 This suggests the quality of collaboration-conflict processes should be measured according to a combination of criteria, for which we suggest:
- Efficiency - Time to complete the task.
- Flexibility - Number of position changes to converge with the majority.
- Contribution - Number of arguments produced by the participants.
4. FTR Tool
This section describes the tool we developed to support the process described in Section 3. We first describe the specific requirements of the groupware tool and associate those requirements with the particular characteristics of the collaboration-conflict process. We then describe the tool’s architecture and interface.
4.1. Addressing the collaboration-conflict model
One fundamental characteristic of the FTR Tool is making the collaboration-conflict process explicit to the participants. It is not enough to manage message exchange according to typical tags like topics, contents, authors, etc. Specific tags are necessary to position messages within the collaboration-conflict spectrum.
To illustrate the problem, consider that message exchange is supported through a typical e-mail tool. The tool preserves the exchanged messages in their temporal order, but the collaborative and conflictive behaviors are not easy to discriminate and follow. This is particularly true with asynchronous interaction.44 Because participants tend to mix several types of contributions into a single message, it is not easy for a remote participant to keep track of the interventions along the collaboration-conflict continuum, which means the participants have to overcome this ambiguity by constantly assessing and reassessing the messages’ contents.
To reduce these problems, the FTR Tool adopts the argumentation model described in Figure 1. This model ensures that exchanged messages may be tracked according to relevant criteria like positions in favor or against and arguments.
It is also important to give the moderator an overall view of the participants’ contributions along the collaboration-conflict continuum. The FTR Tool addresses this issue with a *participameter*.
Table 1 summarizes the participameter information collected by the FTR Tool and delivered to the moderator.
**Table 1. Individual assessment information**
<table>
<thead>
<tr>
<th>Assessment</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Contributing Positions</td>
<td>Number of proposals from a participant in relation to the total number of proposals.</td>
</tr>
<tr>
<td>Contributing Arguments</td>
<td>Percentage of arguments from the participant in relation to the total number of registered arguments.</td>
</tr>
<tr>
<td>Punctuality</td>
<td>Average time to complete the task, as a percentage of the time assigned to the task.</td>
</tr>
<tr>
<td>Relevance of Arguments</td>
<td>Number of arguments from a participant that contributed to the final score, in relation to the total number of arguments.</td>
</tr>
<tr>
<td>Flexibility to Converge</td>
<td>Number of score changes to converge with the majority, in relation to the total number of score changes to converge with the majority.</td>
</tr>
</tbody>
</table>
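The participameter indicators of Table 1 could be computed as in the following sketch. The field names and the data layout are assumptions made for illustration; the FTR Tool’s internal representation may differ.

```python
# Sketch of how the participameter indicators of Table 1 might be
# computed. Field names and data layout are illustrative assumptions.

def participameter(participant, meeting):
    """Return the five Table 1 indicators, as fractions, for one participant."""
    p = meeting["by_participant"][participant]   # this participant's counts
    t = meeting["totals"]                        # meeting-wide totals
    return {
        "contributing_positions": p["proposals"] / t["proposals"],
        "contributing_arguments": p["arguments"] / t["arguments"],
        "punctuality": p["time_spent"] / t["time_assigned"],
        "relevance_of_arguments": p["decisive_arguments"] / t["arguments"],
        "flexibility_to_converge": p["score_changes"] / t["score_changes"],
    }

# Hypothetical data: participant "ana" in a meeting with 10 proposals,
# 12 arguments, 60 minutes assigned and 4 converging score changes.
meeting = {
    "by_participant": {"ana": {"proposals": 2, "arguments": 3,
                               "time_spent": 45, "decisive_arguments": 1,
                               "score_changes": 1}},
    "totals": {"proposals": 10, "arguments": 12,
               "time_assigned": 60, "score_changes": 4},
}
print(participameter("ana", meeting)["contributing_positions"])  # 0.2
```

Delivering these fractions to the moderator, as Table 1 describes, gives a compact per-participant view of where each reviewer sits on the collaboration-conflict continuum.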
### 4.2. FTR implementation
Any FTR requires several pre-arrangements from the moderator. The FTR Tool supports some of these activities. It allows importing the review documents into the system. It also allows presenting the reviewers’ initial proposals and comments, and selecting for discussion (with control by the moderator) the artifacts that seem more conflicting.
Another important supported function is allowing the moderator to check for duplicates and equivocal statements. Using the FTR Tool, the moderator may turn doubts, problems, comments, alternatives, and solutions into validated proposals for assessment by the reviewers in the next phase. We note however that this preliminary phase is not the main focus of our research. We actually concentrated our research on the support to the second phase: the collaboration-conflict process.
The second phase starts when the moderator sends the first validated proposals to the reviewers. New proposals may be delivered during the review if necessary. To ensure confidentiality, the proposals are dissociated from the original authors.
During the second phase, the reviewers register their scores. Each reviewer may associate a score to a proposal, reflecting his/her judgment about the proposal (0 - not an error/accept; 1 - light error/accept with restrictions; 2 - serious error/reject). In case the chosen score is 1 or 2, the reviewer is requested to complement the score with arguments, consisting of small text sentences. All arguments should be linked to a Functional Specification Document. Examples include: “the item cannot be related
with the Functional Specification and should be removed”; “the item does not comply with the specification of function X”; or “the item fails to implement requirement Y”.
The positions in favor and against each proposal are automatically calculated. The divergences are shown to the reviewers without exposing the identities of the opponents. After evaluating the arguments associated with one proposal, a reviewer may change his/her own position or add additional arguments. The changes in positions update the associated arguments. This procedure may be repeated until closing a proposal with the final score. Updates to positions and arguments are visible to all reviewers.
When there are no positions against a proposal, it is immediately “closed” and the final score is known. In order to cover all proposals assigned to an FTR session, a “closed” proposal cannot be reopened in the same session. Also for efficiency reasons, the proposals are controlled by a timeout mechanism. The moderator is responsible for setting the time limits and closing the proposals when the time limits are reached. The reviewers are notified before the proposal time is out. After all proposals are closed the process advances to the decision phase.
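The proposal closure rules can be sketched as a small state machine. The class and attribute names are illustrative assumptions; only the closure conditions (no opposing positions, or timeout) and the no-reopen rule come from the process description.

```python
# Sketch of the proposal closure rules: a proposal closes immediately
# when no positions stand against it, or when its time limit is reached;
# once closed it cannot be reopened in the same session. Names are
# illustrative assumptions, not the FTR Tool's actual classes.

class Proposal:
    def __init__(self, deadline):
        self.deadline = deadline      # time limit set by the moderator
        self.closed = False
        self.final_score = None

    def try_close(self, positions_against, score, now):
        """Close the proposal if it is consensual or has timed out."""
        if self.closed:
            return  # closed proposals cannot be reopened in the session
        if positions_against == 0 or now >= self.deadline:
            self.closed = True
            self.final_score = score

p = Proposal(deadline=100)
p.try_close(positions_against=2, score="accept", now=50)
print(p.closed)                       # False: still under negotiation
p.try_close(positions_against=0, score="accept", now=60)
print(p.closed, p.final_score)        # True accept
```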
The decision phase will determine the output of the FTR. As previously mentioned, a consensual score may not be achieved for every proposal. In order to close the review, the moderator may adopt three different strategies: majority voting, deciding another negotiation round, and assigning his/her decision. The moderator selects one of these rules before starting the session to guarantee the transparency principle.
4.3. Additional implementation details
The FTR tool was built using the Microsoft .NET Framework and the C# language. Being a Web application, it can be used at any time and place. The adopted database management system was SQL Server. To illustrate the prototype, we present some screen dumps.
Figure 2 shows the beginning of the FTR session. The proposals selected by the moderator are displayed at the left. The participants enter their positions on the right.
Collaboration and Conflict in Software Review Meetings
**Fig. 2. Registering the participants’ positions.**
**Fig. 3. Argumentation of divergent positions.**
Figure 3 illustrates some possible outcomes of the collaboration-conflict process. A proposal should be negotiated when it receives different scores. The arguments associated with each position provide rational elements for the change of positions. Notice that in the illustrated example there are no arguments associated with proposal 2 (row 2) because the scores were consensual.
Figure 4 shows how arguments are inserted. For each proposal, the system shows its positions. When a position is added, the system opens a text box for writing an argument.
When the session is completed, a summary is generated. This is shown in Figure 5. When assessing the results, the moderator is able to decide on the next steps. One alternative is giving the participants more time to analyze documents and code, and then scheduling another session. Another possibility is making a decision on the proposals that have not reached consensus.
5. An Evaluation of the FTR Tool
An evaluation of the FTR tool was carried out and its results were compared with those obtained with a standard FTR. The evaluation was
conducted in a telecommunications company operating in Brazil. We use the fictitious name BTC to preserve the anonymity of both the company and the participants. The main purpose of the evaluation was to obtain qualitative insights about the collaboration-conflict model, the process and the FTR tool.
5.1. Experimental setting
BTC subcontracts several software companies to develop software artifacts. The subcontracted companies may be located in Brazil or abroad. Before formally concluding these contracts, all artifacts delivered by the subcontractors must be submitted to a quality assurance process that evaluates them against the specifications described in the contracts. Depending on the task complexity, quality assurance may demand considerable time and effort from both parties. The standard FTR has been used for a few years and all members of the quality assurance team considered that changes could be made to improve it without reducing quality, especially because the FTR were done face-to-face and often involved foreign subcontractors.
The standard FTR engages from five to eight people: up to four authors, three reviewers and a leader. For the evaluation sessions we planned a similar team. However, we had to define how to compare the standard FTR against our approach. In theory we had two alternatives:
1) Select an artifact, perform the standard and new FTR using two different teams, and then compare the results;
2) Assign the same team to two different but equivalent artifacts; and have the team successively apply the standard and the new FTR approaches.
Both alternatives had some constraints. We could not count on real subcontractors to play the authors’ role due to the costs involved. We also did not have formal authorization from BTC to apply the new FTR approach in real reviews. But we still wanted to use real data in our evaluation. We thus adopted a variation of alternative 1: Select two recent artifacts and recover their FTR records, which were already concluded through the standard process. This corresponded to a post-hoc analysis of the FTR process.
After that, we reran the FTR with the same two artifacts but using a different review team. From an experimental point of view, this corresponds to repeating samples with different subjects and experimental conditions (traditional FTR and our approach, using the FTR tool).
We then compared the results of both samples. We are aware of the limitations of this scheme, but we preferred it to using, for example, a totally artificial setting, such as having students perform the FTR.
For the comparison we used three criteria: (1) number of proposals; (2) number of arguments; and (3) number of changed positions toward consensus. The comparison was directed by the assumption that the higher these indicators were, the higher the quality of the reviewing process. The number of changed positions toward consensus was an indicator that deserved some further analysis, as discussed later.
5.2. Evaluation results
In the following description of the evaluation results we will refer to the artifacts as FE29520 and FE22520. First, it should be noted that the reviewers rejected them both. When comparing the results, we observed that the new FTR method resulted in increased numbers of arguments and changed positions towards consensus. This may be a sign that the FTR tool promotes higher levels of argumentation than traditional FTR.
A summary of the obtained quantitative results is reproduced in Tables 2 and 3. The standard FTR of FE29520 resulted in 4 proposals, 6 arguments and 2 changed positions. The new FTR (using the FTR Tool) resulted in 31 proposals. The first session resulted in 15 arguments and 3 changed positions, and the second session an additional 62 arguments and 1 changed position. Regarding the standard FTR of FE22520, we had 9 proposals, no arguments and no changed positions. The new process (FTR Tool), on the other hand, resulted in 23 proposals, 1 argument and also no changed positions. The results from FE22520 show that the participants (and in particular the leader) took the immediate decision to reject the functional specifications, which explains the absence of arguments.
Table 2. Indicators of FE29520 review
<table>
<thead>
<tr>
<th>Indicators</th>
<th>Standard FTR</th>
<th>FTR Tool</th>
</tr>
</thead>
<tbody>
<tr>
<td>Elapsed Time</td>
<td>3</td>
<td>8</td>
</tr>
<tr>
<td>Number of proposals raised</td>
<td>4</td>
<td>31</td>
</tr>
<tr>
<td>Number of sessions</td>
<td>3</td>
<td>2</td>
</tr>
<tr>
<td>Number of arguments placed by reviewers and authors</td>
<td>6</td>
<td>15 and 62 *</td>
</tr>
<tr>
<td>Number of changed positions towards consensus</td>
<td>2</td>
<td>3 and 1*</td>
</tr>
</tbody>
</table>
* First and second sessions, respectively
Table 3. Indicators of FE22520 review
<table>
<thead>
<tr>
<th>Indicators</th>
<th>Standard FTR</th>
<th>FTR Tool</th>
</tr>
</thead>
<tbody>
<tr>
<td>Elapsed Time</td>
<td>7</td>
<td>5</td>
</tr>
<tr>
<td>Number of proposals raised</td>
<td>9</td>
<td>23</td>
</tr>
<tr>
<td>Number of sessions</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>Number of arguments placed by reviewers and authors</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>Number of changed positions towards consensus</td>
<td>0</td>
<td>0</td>
</tr>
</tbody>
</table>
Apparently, the simplicity of the FTR Tool and the short training applied before the sessions were sufficient to accomplish the reviews without relevant problems. We noticed, however, that the arguments were not always used as such. For instance, several comments were inserted as if they were arguments. Comments such as “I agree with the item above” are not real arguments but appeared as such. This may impact the above comparisons. Only about 40% of the arguments written by the team members were actually identified as real arguments. These difficulties are in line with those reported by Borges et al. on the use of a structured argumentation model.
5.3. Questionnaires
The participants in the evaluation (those that used the FTR Tool) were requested to complete an open questionnaire about the tool. The answers to the questionnaire seem to indicate, in a general way, that the tool supports the dynamics of the collaboration-conflict model and promotes collaboration in FTR. A summary of the advantages and disadvantages pointed out by the participants is presented in Tables 4 and 5. Table 4 refers to the standard FTR while Table 5 refers to the FTR Tool usage.
Table 4. Advantages and disadvantages of standard FTR
<table>
<thead>
<tr>
<th>Positive Aspects</th>
<th>Negative Aspects</th>
</tr>
</thead>
<tbody>
<tr>
<td>Often a face-to-face meeting is more productive because people have difficulties in expressing themselves in writing - verbally is easier and faster - especially when it comes to a discussion where reasoning through arguments is necessary.</td>
<td>Meetings are not always possible because of the geographical distribution and the time involved. Also, difficulties documenting the meeting: what has been discussed and what has been resolved.</td>
</tr>
<tr>
<td></td>
<td>Negotiation is difficult because there is no consolidation of ideas in written format.</td>
</tr>
<tr>
<td></td>
<td>Poor use of time in meetings where one</td>
</tr>
</tbody>
</table>
The participants pointed out the following main advantages: (1) the tool was easy to learn; (2) had clear rules; (3) managed knowledge evenly; and (4) preserved the argumentation history. Also, the support to asynchronous and geographically distributed meetings was identified as an advantage, though face-to-face meetings ease understanding and offer more expressiveness.
Table 5. Advantages and disadvantages of using the FTR tool
<table>
<thead>
<tr>
<th>Positive Aspects</th>
<th>Negative Aspects</th>
</tr>
</thead>
<tbody>
<tr>
<td>Outcomes in one place, where all participants have access.</td>
<td>Not enough space to type an idea.</td>
</tr>
<tr>
<td>Participants may interact at the meetings at different times and without the need of being in the same place. It is a solution to the problem of dispersed teams.</td>
<td>The tool was unavailable during certain periods.</td>
</tr>
<tr>
<td>Negotiation was much faster because there was a consolidation of the points raised.</td>
<td>As each person works on her/his own schedule, sometimes the question you insert stays without any response for some time.</td>
</tr>
<tr>
<td>Less likely to shift the meeting focus.</td>
<td>May hinder understanding, if the written communication is not clear.</td>
</tr>
<tr>
<td>Uptake of irrelevant items that may contribute to cleaner and clearer documentation that facilitates the next steps.</td>
<td></td>
</tr>
<tr>
<td>The validation records of each participant are stored and this avoids</td>
<td></td>
</tr>
</tbody>
</table>
It is important to emphasize that the participants, in general, valued the capability to register all arguments in an organized way. This seems to ease changing positions towards consensus and enriches the FTR as a whole.
One of the main problems identified in the standard FTR is that the review repeats itself several times without necessity, only because the reviewers’ recommendations seem to go unnoticed by the authors. The FTR Tool was seen by the participants as a mechanism to overcome this problem.
Overall, the comments produced by the participants indicate that the desired objectives for the FTR are coherent with the collaboration-conflict model: supporting a continuum of collaboration and negotiation. The participants in the experiment indeed recommended the adoption of the FTR Tool in their organization.
6. Discussion
In Table 6 we summarize the various concepts involved in the collaboration-conflict model. The major distinctions concern the behavioral context, expected attitudes, computational support, incentives, contextual factors, quality criteria, and data elements. As the paper shows, the integration of such disparate concepts requires bridging information sharing with negotiation and argumentation. This was implemented in the FTR Tool through one common data element: argument. Arguments contribute at the same time to build a common understanding of the problem and to bring forward different views and conflicting positions.
Looking at this focal point, it was striking to find out that in the evaluation the FTR Tool generated more arguments than the standard FTR. The responses to the questionnaires also emphasize that the participants considered arguments as important meeting elements, allowing them to reason and consolidate the discussion while avoiding bad communication.
<table>
<thead>
<tr>
<th>Behavioral context</th>
<th>Collaboration</th>
<th>Conflict</th>
</tr>
</thead>
<tbody>
<tr>
<td>Expected attitudes</td>
<td>Collaborative</td>
<td>Conflictive</td>
</tr>
<tr>
<td>Computational support</td>
<td>Information sharing</td>
<td>Negotiation, argumentation</td>
</tr>
<tr>
<td>Incentives</td>
<td>Awareness, visualization, knowledge exchange, contribution, consensus</td>
<td>Conflict detection, preferences, settlement spaces, detection of malicious acts</td>
</tr>
<tr>
<td>Contextual factors</td>
<td>Expertise, involvement</td>
<td>Level of conflict, status differences</td>
</tr>
<tr>
<td>Quality criteria</td>
<td>Efficiency, contribution</td>
<td>Efficiency, flexibility</td>
</tr>
<tr>
<td>Data elements implemented by the FTR tool</td>
<td>Proposals, arguments, decision, final scores</td>
<td>Positions, arguments, scores</td>
</tr>
</tbody>
</table>
Table 6. The collaboration-conflict model.
Although these results are promising, we are aware that we need more experiments to claim that computer support may increase argumentation, and also that argumentation may increase the quality of review meetings. The qualitative insights obtained with the experiments show that such causal relationships should be further investigated, and also indicate that the increased number of arguments might be related to the increased number of proposals. One possible interpretation is that the collaboration-conflict model might promote constructive conflict, since conflicting positions may be accompanied by alternative proposals. This interpretation is in line with the observations from D’Astous et al., although in that case no technology support was used.
We also observe that the validation in a real-world setting provided some insights not possible when using students or artificial settings, but on the other hand limited the number of samples and the level of control over the evaluation setting. In any case we are aware that we need more sessions with more variety of artifacts and participants to consolidate our conclusions.
We finally note that of the three quality criteria considered by our study - efficiency, flexibility and contribution - only contribution seems to have been affected by the FTR Tool. Future experiments may be set up to evaluate the impact of technological incentives specifically focused on improving efficiency and flexibility.
7. Conclusions
We developed a collaboration-conflict model for software review meetings and a tool to support it. The collaboration-conflict model brings together very distinct behavioral contexts, expected attitudes, computational support, incentives and quality criteria. The research allowed us to understand how to bring together these elements. The developed collaboration-conflict process integrates information sharing with negotiation and argumentation, linking various data elements such as decisions, proposals, positions, arguments and scores.
Two evaluation sessions were carried out in a telecommunications company that adopts a global software development strategy. We compared the results of four review meetings, two using the standard review process and two using the tool described in this paper. The quantitative and qualitative results provide some insights about the reviewers’ behavior facing the somewhat contradictory process of collaboration-conflict.
First, the evaluation data indicates that the developed tool is capable of supporting software reviews with some advantages over the standard process. Second, the evaluation shows that the pivot data element in the collaboration-conflict model is the argument, as it integrates the collaborative and the negotiated aspects of the tool functionality.
And third, the evaluation also allowed us to identify some points that may constitute subjects for future research. An important challenge is to evaluate the causal relationships between technology use, increased argumentation and improved decision quality. Another challenge is validating the positive relationships between proposals and arguments, delineating what may be designated as “constructive conflict”. And finally, this research also gives some positive indications towards extending the collaboration-conflict model to other collaborative tools and applications.
The research described in this paper contributes to information systems development in two main ways. One is raising attention, articulating the problems and describing a technical solution for integrating collaborative and conflicting behaviors in computational support. The other one is contributing to the development of technology-designed incentive mechanisms, which influence human behavior and process quality through information structures that promote positive and discourage negative values.
Acknowledgments
This work was partially supported by grants No. 479374/2007-4 and 567220/2008-7 from CNPq (Brazil), and grant PTDC/EIA/102875/2008 from FCT (Portugal).
References
CommonRuby - Feature #13581
Syntax sugar for method reference
05/19/2017 12:44 PM - americodls (Americo Duarte)
Status: Closed
Priority: Normal
Assignee:
Target version:
Description
Some other programming languages (even Java, since version 8) have a cool way to refer to a method as a reference.
I wrote some examples here: https://gist.github.com/americodls/20981b2864d166ee8d231904303f24b
I miss this thing in ruby.
I was wondering whether something like this:
roots = [1, 4, 9].map &Math.method(:sqrt)
Could be like this:
roots = [1, 4, 9].map &Math->sqrt
What do you guys think about it?
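For context, the spelling being shortened here already works in current Ruby: `Object#method` returns a bound `Method` object, and the `&` prefix converts it to a block via `Method#to_proc`:

```ruby
# Current Ruby: no new syntax needed, just more verbose.
roots = [1, 4, 9].map(&Math.method(:sqrt))
puts roots.inspect # => [1.0, 2.0, 3.0]
```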
Related issues:
Related to Ruby master - Feature #16275: Revert ".:" syntax Closed
Is duplicate of Ruby master - Feature #12125: Proposal: Shorthand operator fo... Open
History
#1 - 05/19/2017 01:37 PM - Hanmac (Hans Mackowiak)
that might collide with -> {} a lambda syntax
so i think the chances are low that ruby gets something like that.
so ruby probably does think it's "Math(->sqrt)" and that's a Syntax error.
#2 - 05/19/2017 02:16 PM - americodls (Americo Duarte)
The -> was just a suggestion... Could be another symbol or combination of symbols like Math->:sqrt, Math=>:sqrt, Math+:sqrt, Math$:sqrt, Math:>sqrt, etc
I just think could have another way to write it than not a method calling with a symbol as argument, something more concise and expressive.
Hanmac (Hans Mackowiak) wrote:
that might collide with -> {} a lambda syntax
so i think the chances are low that ruby gets something like that.
so ruby probably does think it's "Math(->sqrt)" and that's a Syntax error.
#3 - 05/19/2017 02:43 PM - Hanmac (Hans Mackowiak)
my current thinking is whether that short form should do symbol support.
if Math->sym should be supported, then the normal variant needs to be Math->:sqrt
i currently think if we also could abuse the call method.
so we could harness "Math.(sqrt)" into something.
#4 - 05/20/2017 12:27 AM - americodls (Americo Duarte)
Why does the version with a symbol (Math->:sqrt) need to be supported?
#5 - 08/31/2017 06:31 AM - matz (Yukihiro Matsumoto)
I am for adding syntax sugar for method reference. But I don't like proposed syntax (e.g. ->).
Any other idea?
Matz.
#6 - 08/31/2017 08:48 AM - nobu (Nobuyoshi Nakada)
obj.:method
#7 - 08/31/2017 10:10 AM - Hanmac (Hans Mackowiak)
nobu (Nobuyoshi Nakada) wrote:
obj.:method
i am not sure about that:
obj\method
is already valid ruby code, so i am not sure
PS: when using "&obj.method(:symbol)" should that be optimized if able?
#8 - 08/31/2017 12:10 PM - zverok (Victor Shepelev)
I am for adding syntax sugar for method reference. But I don't like proposed syntax (e.g. ->).
Any other idea?
In my pet projects, I often alias method as m. It is readable enough, short enough and easy to remember, once you've seen it:
roots = [1, 4, 9].map(&Math.m(:sqrt))
%w[foo bar baz].each(&m(:puts))
..., and, if introduced into language core, can be easily backported to earlier versions (through something like backports or polyfill gem).
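The `m` alias described here needs no new syntax at all; a minimal sketch (the alias name is zverok's convention, not core Ruby):

```ruby
class Object
  # "m" as a short alias for Kernel#method, as described above (not core Ruby)
  alias m method
end

# Math.m(:sqrt) is now the same bound Method as Math.method(:sqrt)
roots = [1, 4, 9].map(&Math.m(:sqrt))
puts roots.inspect # => [1.0, 2.0, 3.0]
```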
Another weird-ish idea, following the first one closely, is .:, which (for me) looks guessable:
[1,2,3].map(&Math.:sqrt)
%w[foo bar baz].each(&.:puts)
(BTW, object-less form should also be considered, when weighing proposals, don't you think?)
#9 - 08/31/2017 12:30 PM - k0kubun (Takashi Kokubun)
Another idea: &obj:method
It just puts receiver between & and : from existing one. I'm not sure it conflicts with existing syntax or not but I feel it's consistent with &foo syntax.
#10 - 08/31/2017 12:56 PM - Hanmac (Hans Mackowiak)
k0kubun (Takashi Kokubun) wrote:
Another idea: &obj:method
hm i like that idea, but think that might be a bit conflicting, that depends on if obj is an object or not?
obj = Object.new
obj:method #=> syntax error
but notice that (xyz not known)
xyz:method #=> undefined method 'xyz' for main
it thinks that it's xyz(:method)
Oh, I'm so sad to hear `obj:method` (without `&`) is already valid. I still have hope to have it only when it's put with `&` in the last of the arguments, because that case is not valid for now.

```ruby
def obj(method); method; end
```

```
obj(&:method)
SyntaxError: (irb):3: syntax error, unexpected tSYMBEG, expecting keyword_do or '{' or '('
	from /home/k0kubun/.rbenv/versions/2.4.1/bin/irb:11:in `<main>'
obj(&:method)
SyntaxError: (irb):4: syntax error, unexpected tLABEL
	from /home/k0kubun/.rbenv/versions/2.4.1/bin/irb:11:in `<main>'
obj(&:method)
SyntaxError: (irb):5: syntax error, unexpected tSYMBEG, expecting keyword_do or '{' or '('
	from /home/k0kubun/.rbenv/versions/2.4.1/bin/irb:11:in `<main>'
obj &:obj:method
SyntaxError: (irb):6: syntax error, unexpected tLABEL
	from /home/k0kubun/.rbenv/versions/2.4.1/bin/irb:11:in `<main>'
```
I've never seen a person who writes `a :b` as `a:b` (without space or parenthesis before `:`), and personally I don't expect `&a:b` to be `&a(:b)`.
---
### #12 - 08/31/2017 02:27 PM - nobu (Nobuyoshi Nakada)
Hanmac (Hans Mackowiak) wrote:
> I am not sure about that:
```ruby
obj.
.method
```
is already valid ruby code, so I am not sure
It's entirely different.
My example is a single token `.:`, do not split.
PS: when using `"&obj.method(:symbol)"` should that be optimized if able?
Probably, but it's not possible to guarantee that it will return the method object.
---
### #13 - 08/31/2017 02:31 PM - nobu (Nobuyoshi Nakada)
k0kubun (Takashi Kokubun) wrote:
Another idea: `&obj:method`
Consider a more complex example, `&obj.some.method(args):method`, not only a simple receiver.
`&` and `:` are separated too far.
---
### #14 - 08/31/2017 02:51 PM - Hanmac (Hans Mackowiak)
my idea for optimising `&obj.method(:symbol)` is that it already creates a proc (object) without going over a Method object, i don't know if that would be a good idea for that.
---
### #15 - 09/01/2017 12:56 AM - mrkn (Kenta Murata)
How about `obj.[method_name]` for the syntax sugar of `obj.method(:method_name)`?
---
### #16 - 09/01/2017 06:15 AM - zverok (Victor Shepelev)
Another pretty unholy idea: resemble the way Ruby docs document the methods, e.g. `map(&Math#sqrt)`.
Yes, it conflicts with comment syntax, but in fact, no sane person should join the comment sign immediately after non-space symbol.
And we already have parsing ambiguities like this:
- `foo -bar` → `foo(-bar)`;
- `foo - bar` → `foo.-(bar)`.
#17 - 09/01/2017 06:36 AM - tom_dalling (Tom Dalling)
What about triple colon `:::`?
`::` is for looking up constants, so it kind of makes sense that `:::` is for looking up methods.
#18 - 09/01/2017 08:30 AM - tom-lord (Tom Lord)
Consider the following:
```ruby
def get_method(sym, object = Object)
  object.send(:method, sym)
end
```
This allows us to write code like:
```ruby
[1, 4, 9].map(&get_method(:sqrt, Math))
[1, 2, 3].each(&get_method(:puts))
```
`::` is for looking up constants, so it kind of makes sense that `:::` is for looking up methods.
#19 - 09/29/2017 12:19 AM - k0kubun (Takashi Kokubun)
- Is duplicate of Feature #12125: Proposal: Shorthand operator for Object#method added
#20 - 10/19/2017 01:46 AM - americodls (Americo Duarte)
matz (Yukihiro Matsumoto) wrote:
```
I am for adding syntax sugar for method reference. But I don't like proposed syntax (e.g. ->).
Any other idea?
Matz.
```
What do you think about: Kernel.puts, Kernel->puts, Kernel:>puts ?
#21 - 01/24/2018 09:49 AM - zverok (Victor Shepelev)
Just to push this forward, here are all the syntaxes from this and duplicate #12125.
I am taking Math.sqrt and puts as examples:
- `map(&Math->sqrt)` (and just `each(&->.puts)` probably?) -- Matz is explicitly against it;
- `map(&Math::.sqrt)` (not sure about `puts`);
- `map(&Math.m(:sqrt))`, `each(&m(:puts))` (just shortening, no language syntax change);
- `map(&Math.:sqrt)`, `each(&.:puts)`;
- `map(&Math:sqrt)`, `each(&self:puts)`;
- `map(&Math#sqrt)`, `each(&#puts)` (it was my proposal, "just how it looks in docs", but I should reconsider: in docs it is `Math::sqrt`, in fact);
- `map(&Math:::sqrt)`, `each(&:::puts)`;
- `map(&->(:sqrt, Math))`, `each(&->(:puts))`;
- several by Papierkorb (Stefan Merettig):
  - `map(&Math->sqrt)`, `each(&->puts)` (nobu (Nobuyoshi Nakada): conflicts with existing syntax);
  - `map(&Math:<sqrt>)`, `each(&:<puts>)` (nobu (Nobuyoshi Nakada): conflicts with existing syntax);
- `map(&Math|>sqrt)`, `each(&|>puts)`.
Can somebody please lift this question to the next Developer Meeting and make an Executive Decision?..
I personally really like `:::` (called "tetrus operator" in the other ticket).
**#22 - 01/24/2018 02:48 PM - dsferreira (Daniel Ferreira)**
zverok (Victor Shepelev) wrote:
map(&Math|>sqrt), each(&|>puts) (too confusable with Elixir-like pipe, probably)
I tend to agree with that.
In fact I was hoping to get the pipe operator introduced in ruby. (Created an issue with that in mind: [https://bugs.ruby-lang.org/issues/14392](https://bugs.ruby-lang.org/issues/14392)).
Taking that pipe operator example as an operator that sends messages maybe we can use it to send a message to the class or module. Like this:
```ruby
:sqrt |> Math
```
Since Math is not a method the message would extract the method from the object.
If this is not practical maybe we could invert the operator and do:
```
Math <| :sqrt
```
---
**#23 - 01/25/2018 12:23 PM - nobu (Nobuyoshi Nakada)**
zverok (Victor Shepelev) wrote:
- `map(&Math.|>sqrt)`, `each(&.|>puts)`
This conflicts with existing syntax.
- `map(&Math&>sqrt)`, `each(&&>puts)` (nobu (Nobuyoshi Nakada): conflicts with existing syntax)
Not this.
---
**#24 - 01/25/2018 12:31 PM - zverok (Victor Shepelev)**
nobu (Nobuyoshi Nakada) Thanks, I've updated the list.
Can you please add it to next Developer Meeting's agenda?..
**#25 - 02/01/2018 10:05 PM - mpapis (Michal Papis)**
Not sure it's worth it - but while we are at this thinking of a shorthand, one of the proposals, `&Math&>sqrt`, made me think if it could be automated: all the iterators could recognize methods, so we could avoid the initial `&`, as in `map(Math...)` - I skipped the operator as it's not clear what's preferred.
My two cents to the operator - what about ! and @ would they conflict? (yes for the Kernel, not sure about Class'es).
**#26 - 02/01/2018 11:13 PM - nobu (Nobuyoshi Nakada)**
mpapis (Michal Papis) wrote:
Not sure it's worth it - but while we are at this thinking of a shorthand, one of the proposals, `&Math&>sqrt`, made me think if it could be automated: all the iterators could recognize methods, so we could avoid the initial `&`, as in `map(Math...)` - I skipped the operator as it's not clear what's preferred.
It can't distinguish passing block and passing Method object.
- My two cents to the operator - what about ! and @ would they conflict? (yes for the Kernel, not sure about Class'es).
Math!sqrt and Math@sqrt?
They are valid syntax now.
You can try with ruby -c.
```
$ ruby -wc -e 'Math!sqrt'
Syntax OK
```
They are interpreted as a method call without a receiver, Math(!sqrt) and Math(@sqrt) respectively.
#27 - 02/02/2018 01:51 AM - duerst (Martin Dürst)
nobu (Nobuyoshi Nakada) wrote:
Syntax OK
They are interpreted as a method call without a receiver, Math(!sqrt) and Math(@sqrt) respectively.
This may be just me, but I think this kind of syntax without spaces could (or even should) be deprecated.
That doesn't mean that for the purpose of this issue, I like ! or @. But they might be usable for other purposes.
#28 - 02/02/2018 05:33 AM - nobu (Nobuyoshi Nakada)
duerst (Martin Dürst) wrote:
This may be just me, but I think this kind of syntax without spaces could (or even should) be deprecated.
It would hurt code-golfers and quine-makers. :)
#29 - 02/02/2018 08:20 AM - Hanmac (Hans Mackowiak)
Question for nobu (Nobuyoshi Nakada):
i don't know about the rubyVM but can xyz(&method(:symbol)) or xyz(&obj.method(:symbol)) be optimized like xyz(&:symbol) is? the one with the Symbol was optimized to not create a Proc object if not needed.
can be something similar with the Method object? or if not overwritten maybe not even creating a Method object at all?
#30 - 02/02/2018 08:27 AM - nobu (Nobuyoshi Nakada)
Hanmac (Hans Mackowiak) wrote:
They have different meanings all.
xyz(&method(:symbol)) == xyz {|x| symbol(x)}
xyz(&obj.method(:symbol)) == xyz {|x| obj.symbol(x)}
xyz(&:symbol) == xyz {|x| x.symbol}
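The three equivalences above can be checked with a small driver (the method names `xyz` and `double` are illustrative, not from the thread):

```ruby
def double(x)
  x * 2
end

def xyz
  yield 16
end

p xyz(&method(:double))    # block calls double(16)    => 32
p xyz(&Math.method(:sqrt)) # block calls Math.sqrt(16) => 4.0
p [16].map(&:to_s)         # block calls 16.to_s       => ["16"]
```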
#31 - 02/02/2018 08:37 AM - Hanmac (Hans Mackowiak)
nobu (Nobuyoshi Nakada) wrote:
Hanmac (Hans Mackowiak) wrote:
They have different meanings all.
xyz(&method(:symbol)) == xyz {x | symbol(x)}
xyz(&obj.method(:symbol)) == xyz {x | obj.symbol(x)}
xyz(&:symbol) == xyz {x | x.symbol}
i know they are different meanings,
i was just wondering if they can be optimized for the VM too, to make them run faster if able like with not creating extra ruby objects if not needed
#32 - 02/02/2018 10:57 AM - nobu (Nobuyoshi Nakada)
Hanmac (Hans Mackowiak) wrote:
i know they are different meanings,
Sorry, misread.
i was just wondering if they can be optimized for the VM too, to make them run faster if able like with not creating extra ruby objects if not needed
Once a Method as the result of & to passing a block is allowed, optimization of calling a Method object might be possible by adding a new block handler type for methods.
No "extra ruby objects" would be the next step.
#33 - 02/04/2018 08:10 PM - landongrindheim (Landon Grindheim)
* map(&Math->sqrt) (and just each(&->puts) probably?) -- Matz is explicitly against it;
Is map(&Math.&(:sqrt)) viable? Perhaps it would be confused with the safe navigation operator.
#34 - 02/04/2018 09:12 PM - jeremyevans0 (Jeremy Evans)
landongrindheim (Landon Grindheim) wrote:
Is map(&Math.&(:sqrt)) viable? Perhaps it would be confused with the safe navigation operator.
No. It would break backward compatibility, as that is currently interpreted as:
Math.&(:sqrt).to_proc
That code currently works if you do:
def Math.&(x); proc{|a| a}; end
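The backward-compatibility point is easy to verify: once `Math` gains a method literally named `&`, the expression parses and runs in today's Ruby (the method body here is illustrative):

```ruby
# Define a singleton method named "&" on Math (illustrative, not a real proposal)
def Math.&(sym)
  proc { |a| Math.send(sym, a) }
end

# &Math.&(:sqrt) parses as: call Math.&(:sqrt), then block-pass the resulting proc
p [1, 4, 9].map(&Math.&(:sqrt)) # => [1.0, 2.0, 3.0]
```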
#35 - 02/05/2018 05:39 PM - sevos (Artur Roszczyk)
Have we ruled out the `map(&obj:method)` syntax? Intuitively I find it consistent with Symbol#to_proc.
```ruby
class Foo
  def initialize(array)
    @array = array
  end

  def call
    @array
      .map(&Math:sqrt)
      .map(&self:magic)
      .map(&self:boo(2.0))
      .map(&:ceil)
      .each(&Kernel:puts)
  end

  private

  def magic(x)
    x ** 3
  end

  def boo(a, b)
    a / b
  end
end
```
Alternatively, I am for triple colon:

```ruby
class Foo
  def initialize(array)
    @array = array
  end

  def call
    @array
      .map(&Math:::sqrt)
      .map(&self:::magic)
      .map(&:::boo(2.0)) # with triple colon we could omit self
      .map(&:ceil)
      .each(&Kernel:::puts)
  end

  private

  def magic(x)
    x ** 3
  end

  def boo(a, b)
    a / b
  end
end
```
This could translate to:
```ruby
class Foo
  def initialize(array)
    @array = array
  end

  def call
    @array
      .map { |x| Math.public_send(:sqrt, x) }
      .map { |x| self.send(:magic, x) }
      .map { |x| self.send(:boo, x, 2.0) }
      .map { |x| x.ceil }
      .each { |x| Kernel.public_send(:puts, x) }
  end

  private

  def magic(x)
    x ** 3
  end

  def boo(a, b)
    a / b
  end
end
```
Applying additional arguments (aka `.map(&self:boo(2.0))`) is just a proposal - I am not sure if this should even be possible - Symbol#to_proc does not allow that.
Another interesting question which we need to answer is:
**What visibility scope should be used when making a method call?**
Given the syntax receiver:method or receiver:::method, if the receiver is self then we should expand this syntax sugar to send(), allowing access to the private interface of the current object (which is not the item from the iterator - we would use symbol-to-proc in that case). However, if the receiver is something else, we should expand to public_send to disallow accessing private methods of other objects.
Just my two cents ;)
Cheers,
Artur
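The visibility rule proposed here mirrors the existing `send`/`public_send` split, which can be seen directly (class and method names are illustrative):

```ruby
class Account
  private

  def audit(x)
    "audited #{x}"
  end
end

acct = Account.new
p acct.send(:audit, 1) # send reaches private methods => "audited 1"
begin
  acct.public_send(:audit, 1) # public_send does not
rescue NoMethodError => e
  p e.class # => NoMethodError
end
```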
---
#36 - 02/06/2018 06:14 AM - nobu (Nobuyoshi Nakada)
Note that `&:` isn't a single operator, but a combination of the `&` prefix + a part of `:symbol`. So it should be valid syntax solely, without `&`.
#37 - 02/06/2018 08:19 AM - sevos (Artur Roszczyk)
After a while I am becoming a bigger fan of the triple colon operator. We could implement a class MethodSelector for handling the logic and the operator would be expected to return an instance of the class:
```ruby
class MethodSelector
  def initialize(b, receiver, m)
    @binding = b
    @receiver = receiver
    @method = m
  end

  def call(*args, **kwargs, &block)
    ...
  end

  def to_proc
    if @binding.eval("self") == @receiver
      proc do |*args, **kwargs, &block|
        if kwargs.empty?
          @receiver.send(@method, *args, &block)
        else
          @receiver.send(@method, *args, **kwargs, &block)
        end
      end
    else
      proc do |*args, **kwargs, &block|
        if kwargs.empty?
          @receiver.public_send(@method, *args, &block)
        else
          @receiver.public_send(@method, *args, **kwargs, &block)
        end
      end
    end
  end
end

# Instead of the MS() method we should implement the ::: operator (taking two arguments):
# receiver:::method expands to MS(binding, receiver, method)
class Object
  def MS(b, receiver, m)
    MethodSelector.new(b, receiver, m)
  end
end
```

Example usage:

```
> MS(binding, Kernel, :puts) # the triple colon operator should expand current binding by default
=> #<MethodSelector:0x007fdba89bd0a8 @binding=#<Binding:0x007fdba89bd0d0>, @receiver=Kernel, @method=:puts>
> [1,2,3].each(&MS(binding, Kernel, :puts))
1
2
3
=> nil
```
There is still the question of how to enable meta-programming with the triple colon operator.
Imagine the situation when the method name is dynamic. How do we distinguish it from the symbol?
method = :puts
Kernel:::puts
Kernel:::method
The only logical solution to me is the presence of a fourth colon for the symbol:
method = :puts
Kernel::::puts # evaluates as Kernel:::(:puts)
Kernel:::method # evaluates as Kernel:::(method)
What are your thoughts?
#38 - 02/06/2018 08:39 AM - phluid61 (Matthew Kerwin)
sevos (Artur Roszczyk) wrote:
What are your thoughts?
I have two:
1. As always: do we really need more magic symbols? I like reading Ruby because it's not Perl.
2. If you're adding new syntax, you don't have to be clever. Symbol has `:"#{x}"` so why not propose `y:"#{x}"`? Not that it adds much over `y.method(x)`
#39 - 02/06/2018 10:52 AM - sevos (Artur Roszczyk)
phluid61 (Matthew Kerwin) wrote:
sevos (Artur Roszczyk) wrote:
What are your thoughts?
I have two:
1. As always: do we really need more magic symbols? I like reading Ruby because it's not Perl.
I totally agree, but we still like -> {} syntax for lambdas, right? Let's play with ideas, maybe we can find something nice for a method selector, too ;)
2. If you're adding new syntax, you don't have to be clever. Symbol has `:"#{x}"` so why not propose `y:"#{x}"`? Not that it adds much over `y.method(x)`
You're totally right! I was looking at triple-colon as an operator taking two arguments. Your idea of looking at this as a double-colon lookup operator is actually great, look:
```
irb(main):006:0> a :: :to_s
SyntaxError: (irb):6: syntax error, unexpected tSYMBEG, expecting '('
	from /Users/sevos/.rbenv/versions/2.4.0/bin/irb:11:in `<main>'
```
We already have a lookup operator which takes object, constant on the left side and method name or constant on the right side. Maybe it would be possible to support symbols on the right side and expand them to method(:symbol) call? I would like just to emphasise again the need of respecting the method-to-be-called visibility depending on the current binding.
#40 - 02/06/2018 01:15 PM - phluid61 (Matthew Kerwin)
sevos (Artur Roszczyk) wrote:
phluid61 (Matthew Kerwin) wrote:
sevos (Artur Roszczyk) wrote:
What are your thoughts?
I have two:
1. As always: do we really need more magic symbols? I like reading Ruby because it's not Perl.
I totally agree, but we still like -> {} syntax for lambdas, right? Let's play with ideas, maybe we can find something nice for a method selector, too ;)
Personally I hate it, and never use it. I like my code to say lambda when I make a Lambda, and (more often) proc when I make a Proc.
2. If you're adding new syntax, you don't have to be clever. Symbol has `:"#{x}"` so why not propose `y:"#{x}"`? Not that it adds much over `y.method(x)`
You're totally right! I was looking at triple-colon as an operator taking two arguments. Your idea of looking at this as a double-colon lookup operator is actually great, [...]
Although I doubt I'd ever use it.
(Personally I find the idea of partially applied methods more useful.)
Cheers
#41 - 02/12/2018 11:54 PM - cben (Beni Cherniavsky-Paskin)
A non-syntax idea: could Math.method.sqrt look significantly nicer than Math.method(:sqrt)? That is, .method without args would return a magic object that for any message returns the bound method of that name.
Naive implementation (some names don't work, e.g. Math.method.method_missing, and it doesn't take visibility and refinements into account):
```ruby
class Methods < BasicObject
  def initialize(obj)
    @obj = obj
  end

  def method_missing(name)
    @obj.method(name)
  end

  def respond_to_missing?(name, include_private = false)
    true
  end
end

module MethodWithoutArgs
  def method(*args)
    if args.empty?
      Methods.new(self)
    else
      super
    end
  end
end

Object.prepend(MethodWithoutArgs)
```

```
[14] pry(main)> [1, 4, 9].map(&Math.method.sqrt).each(&method.puts)
1.0
2.0
3.0
=> [1.0, 2.0, 3.0]
```
BTW, what about refinements? Is .method(:foo) ignorant about them? A benefit of a real syntax might be that it could "see" methods from lexically active refinements.
As for syntax, I'm wondering if something postfix might work. The reason I say this is I'm thinking of both &: and this as shorthands for writing out a block.
`&:` can be read locally, it roughly "stands for" `|x| x.`:
[1, 2, 3].map{|x| x.to_s}
[1, 2, 3].map(&:to_s)
And with a bound method, we want to elide the argument declaration, plus the call that comes after the receiver.message:
[1, 4, 9].map{|a| Math.sqrt(a)}.each{|a| puts(a)} # half baked idea
[1, 4, 9].map(Math.sqrt).each(&puts) # quarter baked
OK, actually there is a more generic feature I'd love much more than a syntax for bound methods: implicit notation for block arg:
[1, 2, 3].map{|x| x.to_s}
[1, 2, 3].map{_.to_s}
[1, 4, 9].map{|x| Math.sqrt(x)}.each{|x| puts(x)}
[1, 4, 9].map{Math.sqrt(_)}.each{puts(_)}
[1, 2, 3].map{|x| 1/x}
[1, 2, 3].map{1/_}
(I don't think _ is possible, just an example)
The part I love most about this is that `{...}` does not become `(&...)`! This doesn't easily handle multiple args, like bound methods do, but I think one arg is the sweet spot for such a shorthand anyway.
- I've tried prototyping this once by defining `Kernel._` that would look in the caller frame, but didn't find any way to access the arg in a block that didn't declare any `|args|`.
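As a historical footnote: Ruby 2.7 later shipped numbered block parameters, which come close to the implicit-argument idea sketched above:

```ruby
# Requires Ruby >= 2.7: _1 is the implicit first block argument
p [1, 2, 3].map { _1.to_s }       # => ["1", "2", "3"]
p [1, 4, 9].map { Math.sqrt(_1) } # => [1.0, 2.0, 3.0]
```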
**#42 - 02/14/2018 12:52 AM - sevos (Artur Roszczyk)**
cben (Beni Cherniavsky-Paskin) wrote:
> A non-syntax idea: could Math.method.sqrt look significantly nicer than Math.method(:sqrt)?
> That is, method without args would return a magic object that for any message returns the bound method of that name.
Hey Beni! Thank you! This is a great idea! For my taste it looks significantly better!
Also, I took the liberty of implementing a prototype gem and I've added my two cents:
- method visibility check
- arguments currying
You can check it out on Github
**#43 - 03/26/2018 09:43 PM - pvande (Pieter van de Bruggen)**
As a blue sky alternative, why not consider something like this:
```
[1, 2, 3].map(&> { Math.sqrt })
```
# or perhaps more simply
```
[1, 2, 3].map(&> Math.sqrt)
```
**Pros:**
- It's clean
- It's readable
- It reads like passing a block
- Specifically, passing a lambda as a block
- It also reads something like the Elixir pipe operator
**Cons:**
- It only looks like Ruby code
- Is `x.map(&> { a.b.c.d ; Math.sqrt })` valid? (I hope not.)
- Is `x.map(&> do; Math.sqrt; end)` valid? (I hope not.)
- Is `x.map(&> { begin; rescue; end })` valid? (I hope not.)
- Is `x.map(&> { Math.sqrt if x })` valid? (I hope not.)
- Is `x.map(&> { Math.sqrt rescue nil })` valid? (I hope not.)
- It's not actually a shorthand for Object#method.
- The two clearest implementation paths are as a "macro", and as an "atypical evaluation context"
- The macro approach simply transforms the code into a "proper" block, and passes the argument implicitly (see below for an example)
- The other approach requires the interpreter to evaluate all non-terminal method calls, then produce a block invoking the terminal call with the yielded argument (see below for an example)
Despite the "oddness" of this particular syntax, I think the clarity of expression is very much in line with the Ruby ideals, and is therefore worth discussing.
**Macro Example**
```
fn(&> { a.b.c.d })
# => fn() { |__x| a.b.c.d(__x) }
```
**Atypical Evaluation Example**
```
fn(&> { a.b.c.d })
# => __target = a.b.c; fn() { |__x| __target.d(__x) }
```
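The observable difference between the two expansions can be sketched with plain lambdas (a toy stand-in, not a parser change; `chain` here fakes a receiver chain `a.b.c` with a side effect):

```ruby
calls = 0
chain = -> { calls += 1; Math }   # stands in for `a.b.c`, counting evaluations

# Macro expansion: fn { |__x| a.b.c.d(__x) } — chain re-evaluated per element
macro = ->(x) { chain.call.sqrt(x) }
[1, 4, 9].map(&macro)
macro_count = calls               # chain ran once per element

# Atypical evaluation: __target = a.b.c; fn { |__x| __target.d(__x) }
calls = 0
target = chain.call               # chain evaluated once, up front
hoisted = ->(x) { target.sqrt(x) }
[1, 4, 9].map(&hoisted)
hoisted_count = calls             # chain ran exactly once

puts [macro_count, hoisted_count].inspect  # => [3, 1]
```

So the two paths agree only when the receiver chain is pure; with side effects (or a chain whose value changes over time) they diverge.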
**#44 - 04/23/2018 07:00 AM - baweaver (Brandon Weaver)**
sevos (Artur Roszczyk) wrote:
After a while I am becoming a bigger fan of the triple colon operator. We could implement a class MethodSelector for handling the logic and the operator would be expected to return an instance of the class:
```ruby
class MethodSelector
  def initialize(b, receiver, m)
    @binding = b
    @receiver = receiver
    @method = m
  end

  def call(*args, **kwargs, &block)
    # ...
  end

  def to_proc
    if @binding.eval("self") == @receiver
      proc do |*args, **kwargs, &block|
        if kwargs.empty?
          @receiver.send(@method, *args, &block)
        else
          @receiver.send(@method, *args, **kwargs, &block)
        end
      end
    else
      proc do |*args, **kwargs, &block|
        if kwargs.empty?
          @receiver.public_send(@method, *args, &block)
        else
          @receiver.public_send(@method, *args, **kwargs, &block)
        end
      end
    end
  end
end
```
```ruby
# Instead of the MS() method we should implement the ::: operator (taking two arguments):
# receiver:::method expands to MS(binding, receiver, method)
class Object
  def MS(b, receiver, m)
    MethodSelector.new(b, receiver, m)
  end
end
```

```
# Example usage
> MS(binding, Kernel, :puts) # the triple colon operator should expand the current binding by default
=> #<MethodSelector:0x007fdba89bd0a8 @binding=#<Binding:0x007fdba89bd0d0>, @receiver=Kernel, @method=:puts>
> [1, 2, 3].each(&MS(binding, Kernel, :puts))
1
2
3
=> nil
```
There is still the question how to enable meta-programming with triple colon operator.
Imagine the situation when the method name is dynamic. How to distinguish it from the symbol?
```ruby
method = :puts
Kernel:::puts
Kernel:::method
```
The only logical solution to me is the presence of the fourth colon for the symbol:
```ruby
method = :puts
Kernel:::puts   # evaluates as Kernel:::(:puts)
Kernel:::method # evaluates as Kernel:::(method)
```
What are your thoughts?
I like the idea of triple-colon as well for succinctness.
Most of the alternatives in terms of succinctness would involve discussing the no-parens syntax, which is likely a non-starter for obvious compatibility reasons.
That is, unless there's a way to stop paren-free method calling in the presence of an & or to_proc:
[1,2,3].map(&Math.sqrt)
...but that feels like an excess of black magic in the parser and would likely be prone to bugs.
I really do like what Scala does with underscores:
[1,2,3].map(_ * 10)
...but I also understand that that would also be hugely breaking in terms of syntax as well.
Really though I think given what Ruby already does the triple-colon is the cleanest route for now.
#45 - 05/17/2018 06:45 AM - matz (Yukihiro Matsumoto)
Out of ruby-core:85038 candidates, .: looks best to me (followed by :::).
Let me consider it for a while.
Matz.
#46 - 05/23/2018 09:47 AM - cben (Beni Cherniavsky-Paskin)
Matz, could you give your thoughts on obj::method (with lowercase on right side) syntax?
AFAICT it's a synonym for obj.method?
Does anybody use :: for method calls in practice?
I understand breaking compatibility can't be justified here, but I'm just curious — do you see this syntax as a feature to be preserved, or something to be avoided that might be (slowly) deprecated one day?
#47 - 11/10/2018 05:01 AM - ianks (Ian Ker-Seymer)
matz (Yukihiro Matsumoto) wrote:
> Out of ruby-core:85038 candidates, .: looks best to me (followed by :::).
> Let me consider it for a while.
> Matz.
Would love to see either one implemented at this point. Not having an ergonomic way to do functional composition is a pain.
My dream:
```
slug = title
.then(&:strip)
.then(&:downcase)
.then(I18n:::transliterate)
.then(Utils:::hyphenate)
.then(Validations:::check_length)
.tap(PostLogger:::info)
```
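For comparison, the receiver-method part of this dream is already expressible with Ruby 2.6's `then`; it's the module-function steps that still need the `&SomeModule.method(:name)` dance. A runnable sketch (the string and steps are invented):

```ruby
# What `then` chaining already gives us today (Ruby 2.6+):
slug = "  Hello World  "
  .then(&:strip)
  .then(&:downcase)
  .then { |s| s.gsub(/\s+/, "-") }

puts slug  # => "hello-world"
```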
#48 - 11/12/2018 07:20 AM - nobu (Nobuyoshi Nakada)
https://github.com/nobu/ruby/tree/feature/13581-methref_op
#49 - 11/12/2018 10:29 AM - zverok (Victor Shepelev)
nobu (Nobuyoshi Nakada) Awesome!
Am I correct that receiver-less call, like something.map(&.:puts), will be impossible?
Is it a voluntary design decision, or limitation of what can be parsed?
#50 - 11/14/2018 02:46 PM - shevegen (Robert A. Heiler)
I think .: is better than ::: but it is not very pretty either. I have no better suggestion, though. Good syntax is not easy. :( I agree with the functionality by the way.
#51 - 11/15/2018 01:50 AM - nobu (Nobuyoshi Nakada)
zverok (Victor Shepelev) wrote:
> Am I correct that receiver-less call, like something.map(&.:puts), will be impossible?
To allow that, .:puts would have to be a sole expression by itself. However Ruby has had line continuation for the "fluent interface" style (like https://bugs.ruby-lang.org/issues/13581#change-74822) for a decade. If .: is introduced, I think it should obey that syntax too, and allowing it without the receiver feels confusing.
> Is it a voluntary design decision, or a limitation of what can be parsed?
It is easy to add a receiver-less syntax.
https://github.com/ruby/ruby/commit/2307713962c3610f4e034e328af37b19be5c7c45
#52 - 11/15/2018 08:42 AM - zverok (Víctor Shepelev)
nobu (Nobuyoshi Nakada)
> If .: will be introduced, I think it should obey that syntax too, and allowing it without the receiver feels confusing.
Can you please show some example of confusing statements? I can't think of any off the top of my head; it seems that (if the parser can handle it) the context for `x.:something` and a receiver-less `.:something` is always clearly different.
I am concerned about receiver-less version because in our current codebase we found this idiom to be particularly useful:
```ruby
# in a large data-processing class
some_input
.compact
.map(&method(:process_item)) # it is a private method of the current class
.reject(&method(:spoiled?))
.tap(&method(:pp)) # temp debugging statement
.group_by(&method(:grouping_criterion))
.yield_self(&method(:postprocess))
# which I’d be really happy to see as
some_input
.compact
.map(&.:process_item)
.reject(&.:spoiled?)
.tap(&.:pp)
.group_by(&.:grouping_criterion)
.then(&.:postprocess)
```
Having to explicitly state `map(&self.:process_item)` is much less desirable.
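The `&method(:name)` idiom under discussion, as runnable code with today's syntax (the helper methods here are invented for illustration):

```ruby
# Private helpers of the enclosing scope, referenced as bound methods.
def process_item(n)
  n * 2
end

def spoiled?(n)
  n.negative?
end

result = [1, -2, 3, nil]
  .compact
  .map(&method(:process_item))
  .reject(&method(:spoiled?))

p result  # => [2, 6]
```

The proposed receiver-less `.:` would only shorten the `&method(:...)` spelling; the pipeline shape stays the same.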
#53 - 11/15/2018 10:02 AM - AlexWayfer (Alexander Popov)
zverok (Víctor Shepelev) wrote:
> nobu (Nobuyoshi Nakada)
> > If .: will be introduced, I think it should obey that syntax too, and allowing it without the receiver feels confusing.
> [...]
> I am concerned about receiver-less version because in our current codebase we found this idiom to be particularly useful: [...]
> Having to explicitly state `map(&self.:process_item)` is much less desirable.
Just an opinion:
```ruby
processed = some_input
.compact
.map { |element| ProcessingItem.new(element) } # or `.map(&ProcessingItem.method(:new))`.
.reject(&:spoiled?)
.each(&:pp) # temp debugging statement
.group_by(&:grouping_criterion)
postprocess processed
```
Or you could even use a `ProcessingItem` collection class, with its own state and behavior, instead of a bunch of private methods in a processing class all taking the same (collection) argument.
#54 - 11/15/2018 10:09 AM - zverok (Victor Shepelev)
> Just an opinion
It is funny how, when you show some imaginary code quickly written just to illustrate the point of a language feature, people tend to discuss that code's design approaches instead.
Yes, obviously, in a situation like "several consecutive, algorithmically complex methods working on the same collection" it is typically wise to just wrap collection items. But that has absolutely nothing to do with the point of my example.
#55 - 11/15/2018 10:26 AM - AlexWayfer (Alexander Popov)
zverok (Victor Shepelev) wrote:
> Just an opinion
> It is funny how when you show some imaginary code, quick-written just to illustrate the point of a language feature, people tend to discuss this code's design approaches instead.
> Yes, obviously, in the situation like "several consecutive, algorithmically complex methods working on the same collection" it is typically wise to just wrap collection items. But that has absolutely nothing to do with the point of my example.
I just try to use the good (existing) sides of the language. Ruby already has the nice Symbol#to_proc syntax. And yes, different "syntax sugars" allow different design approaches (classes vs functions, for example). But sometimes they also enable bad practices. I'm not sure, and I'm not against syntax sugar, but... I like solutions for real problems, not imaginary ones. With knowledge of solutions for imaginary problems we can create these problems later, instead of resolving them with other approaches.
#56 - 11/15/2018 10:58 AM - zverok (Victor Shepelev)
> I like solutions for real problems, not imaginary.
The map(&method(:local_method)) or yield_self(&method(:local_method)) pattern is absolutely real and very useful. My point was, we have plenty in our current codebase (and no, they are not "you just need to wrap collection items"; my example was exaggerated for illustrative purposes).
And I can definitely say it brings value for code design and clarity, and will be even more so with map(&.:local_method) syntax.
#57 - 11/26/2018 08:10 PM - shevegen (Robert A. Heiler)
> The map(&method(:local_method)) or yield_self(&method(:local_method)) pattern is absolutely real and very useful.
Everyone who suggests something tends to think of it as useful and often pretty too. :-)
I personally have become too picky perhaps. I already don't like yield_self much at all; "then" is better than yield_self though.
In ruby it is possible to avoid a lot of things and still end up writing pretty code that is fun.
> My point was, we have plenty in our current codebase
I think a pretty syntax is great, but it is not necessarily ruby's primary design goal. It is difficult to say because I am not matz :) but I think matz has said before that ruby should be fun to use; and solve real problems; and help people. If you look at the safe navigation operator, this is a good example, in my opinion. Someone had a use case and suggested a solution to his problem, and matz agreed with the problem description and added the safe navigation operator.
There are lots of things I myself have not yet used, including the safe navigation operator. I also have not yet used the -> lambda variant. I no longer use @@variables either. I am sure other people have different ways to use and approach ruby. What I do use, and that has been somewhat newish (well, not yet 10 years old I think), is the new additional hash syntax, since it has a net benefit - less to type, e.g.:
```ruby
:foo => :bar
```
versus
```ruby
foo: :bar
```
Especially if I have lots of entries, the second variant is indeed easier to use for me.
> And I can definitely say it brings value for code design and clarity, and will be even more so with map(&.:local_method) syntax.
Using this reasoning we could say that EVERY addition is GOOD and USEFUL because we get more features. But it is not quite like that. More and more features make a language harder to use and more complicated too. Some features also look strange. For example, I personally dislike map(&.:local_method). I don't have an alternative suggestion that is nice to read, but it is hard for me to visually distinguish what is going on there.
Ultimately it is up to matz how he wants to change ruby, but I really don't feel that in all the discussions the trade off was or is really worth it. This may be up to personal preferences or habits, yes - but ... I don't know.
When I look at things such as map(&.:local_method), then the original suggestion in the issue of:
```ruby
roots = [1, 4, 9].map Math->method
```
becomes a LOT cleaner and easier to read. ;)
(Only thing that puts me off is that -> is used in e.g. LPC to invoke methods; I much prefer just the single "." notation. Do also note that I agree that we should be able to pass arguments to .map(&) but... I don't know. It's visually not very pleasing to my eyes.)
#58 - 12/09/2018 10:19 PM - shuber (Sean Huber)
matz (Yukihiro Matsumoto) wrote:
> Out of ruby-core:85038 candidates, .: looks best to me (followed by :::).
> Let me consider it for a while.
> Matz.
matz (Yukihiro Matsumoto) and nobu (Nobuyoshi Nakada)
What do you guys think about this alternative syntax? (working proof of concept: https://github.com/LendingHome/pipe_operator)
```ruby
-9.pipe { abs | Math.sqrt | to_i }
#=> 3
```
```ruby
[9, 64].map(&Math.|.sqrt)
```
There's nothing really new/special here - it's just a block of expressions like any other Ruby DSL and the pipe `|` operator has been around for decades!
The https://github.com/LendingHome/pipe_operator README contains many more examples and the implementation details - I would love to hear your thoughts!
Thanks,
Sean Huber
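A minimal sketch of how such a pipe block could work (an assumption for illustration only — the real pipe_operator gem's internals differ, and this toy only handles methods of the piped value itself, not module calls like `Math.sqrt`):

```ruby
# One pipeline step (or several chained with `|`).
class PipeStep
  attr_reader :calls

  def initialize(calls)
    @calls = calls
  end

  # `a | b` concatenates two pipelines into one.
  def |(other)
    PipeStep.new(calls + other.calls)
  end

  # Thread a value through every step in order.
  def apply(value)
    calls.reduce(value) { |v, c| c.call(v) }
  end
end

# Bare names inside the block become steps via method_missing.
class PipeContext < ::BasicObject
  def method_missing(name, *args)
    ::PipeStep.new([->(v) { v.public_send(name, *args) }])
  end

  def respond_to_missing?(_name, _include_private = false)
    true
  end
end

module Pipeable
  def pipe(&block)
    PipeContext.new.instance_eval(&block).apply(self)
  end
end
Object.include(Pipeable)

p(-9.pipe { abs | to_s })  # => "9"
```

The block is evaluated against a blank-slate collector, so each bare call records a step; `|` merely concatenates step lists, which is what makes the DSL read like a shell pipeline.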
---
matz (Yukihiro Matsumoto) This pipe_operator syntax actually looks very similar to https://github.com/matz/streem!
```ruby
url.pipe { URI.parse | Net::HTTP.get | JSON.parse }

"https://api.github.com/repos/ruby/ruby".pipe do
  URI.parse
  Net::HTTP.get
  JSON.parse.fetch("stargazers_count")
  then { |n| "Ruby has #{n} stars" }
  Kernel.puts
end
#=> Ruby has 15120 stars
```
https://github.com/LendingHome/pipe_operator
#62 - 12/13/2018 04:58 PM - shuber (Sean Huber)
Also discussing pipe operators in https://bugs.ruby-lang.org/issues/14392#note-26
#63 - 12/13/2018 05:51 PM - headius (Charles Nutter)
Triple : doesn't parse right now and has some synergy with constant references:
obj:::foo #=> obj.method(:foo)
#64 - 12/19/2018 11:30 AM - zverok (Victor Shepelev)
From developer's meeting log:
2.7 or later
knu: Introducing "..." (as in Lua) would allow for writing this way: ary.each { puts(...) }
Matz: Allowing omission of "self" sounds like a bad idea because that makes each(&.:puts) and each(&:puts) look very similar but act so differently.
Matz's last note makes a lot of sense to me, so I withdraw my petition for the self-less operator :)
#65 - 12/31/2018 03:00 PM - nobu (Nobuyoshi Nakada)
- Status changed from Open to Closed
Applied in changeset trunk|r66667.
Method reference operator
Introduce the new operator for method reference, .:
[Feature #12125] [Feature #13581]
[EXPERIMENTAL]
#66 - 11/20/2019 10:21 AM - znz (Kazuhiro NISHIYAMA)
- Related to Feature #16275: Revert `.:` syntax added
Exploring Architecture-Based Reliability Analysis of Current Multi-Layered Web Applications
Euler H. Marinho, Alysson A. Mendonça
Department of Computer Science
Federal University of Minas Gerais
Belo Horizonte, Brazil – 31270-010
Email: eulerhm@dcc.ufmg.br, alymenbr@yahoo.com.br
Genaína N. Rodrigues, Vander Alves and Rodrigo Bonifácio
Department of Computer Science
University of Brasília, Brasília, DF - Brazil
Email: genain@ciic.unb.br, valves@unb.br, rbonifacio@ciic.unb.br
Abstract—Web application architecture evolved from simple web site add-ons to complex n-layer applications. However, identifying components in this domain is usually a subjective task, as web applications typically comprise web pages, scripts, forms, applets, servlets or simply web objects. As a result of this subjectivity, a component-based life-cycle might reflect inconsistencies not only in a clear definition of web components, but also in the development process itself. In addition, it is hard to identify which components are more critical to specific tasks, so that developers could spend more time improving their design. That quality certainly comprises reliability, availability and security, summed up as dependability attributes. The application of architecture-based reliability analysis techniques in various domains has contributed to solving those problems. However, very little has been done towards the assessment of current web applications in a real-life setting. In this work, we explore the feasibility of applying an architecture-based reliability analysis method to a real-life web application. Our preliminary results show the potential of this method for the web application domain, with considerable accuracy.
I. INTRODUCTION
Architecture-based software reliability assessment has gained increased attention in the last decade [19], but it is usually considered a challenging task. Firstly, the impact of faults on system reliability depends directly on how often each part of the system is executed. This is associated with the usage profile of a system, which is usually known only after the system is deployed in the production environment. Secondly, the reliability of a software architecture depends on the reliability of its individual software components. Obtaining failure data for the components represents a challenge both in early and in late stages of development. In early stages, the degree of abstraction can be very high, which may lead to imprecise estimation of component reliability. In late stages this is also challenging: for instance, component failure of industrial-strength software is mostly associated with detailed information derived from fine-grained bug report fields, tailored to the software implementation and not necessarily to the software architecture. This would not be a problem were it not for the well-known phenomenon, commonly present in practice, of architectural drift (or erosion) [30]. As a result, there is commonly an unclear failure association between the low abstraction level of the software module implementation and the architecture components. On top of that, complex interactions among components (or between components and the execution environment) may be unknown.
Assessing reliability in web applications is even harder. First, there is a lack of literature dedicated to objectively and accurately assessing the reliability of current web applications from the perspective of software architecture, although there is some emerging work from the perspective of existing web logs [32]. In addition, web applications usually comprise components implemented in different languages (markup, scripting, query, programming languages) and from different sources [25], for example, legacy components, COTS, and so on. According to a conceptual view of this application class proposed by Di Lucca et al. [26], the definition of component for this kind of application is subjective: it could be Web Pages, Scripts, Forms, Applets, Servlets or simply Web Objects. Besides, with the advent of Rich Internet Applications (RIA) using technologies such as Ajax [14], there has been a change in the architectural style of web applications [27]. As a result of this subjectivity, a component-based life-cycle might lead to inconsistencies not only in a clear definition of web application components, but also in the software development process itself. In particular, it is hard to identify critical components to which developers should allocate more project resources to improve the quality of the application. To sum up, the software dependability attributes [3], e.g., reliability, availability and security, may be seriously compromised.
Accordingly, in this paper we explore the suitability of accurately analyzing industrial-strength web applications through our approach to reliability analysis from the perspective of the software architecture [34]. In particular, here we apply our approach considering a simplified, but common, layered-architecture decomposition. Based on this simplification, we are able to assign the failure history of a project to a small but comprehensive number of components, even though the implementation might comprise assets implemented in different languages. We perform an evaluation on a real-life web application developed with current web technologies. This application organizes the activities related to supply contracting of the State Government of Minas Gerais, aiming at increasing the transparency of contract processes.
Similar to our previous work, here we use the Prism model checker [23] to perform simulations, in such a way that we can reason about the reliability contribution of individual components with respect to the whole system reliability. Nevertheless, here we use historical data of a real web application to estimate the component reliabilities and transition probabilities. The remainder of this work is organized as follows. In Section II we briefly introduce our technique for architecture-based reliability analysis and the tool environment for the quantitative evaluation. In Section III we describe the industrial-strength web system used to conduct our evaluation. The quantitative validation of these results is presented in Section IV. Finally, related work is discussed in Section V, whereas in Section VI we conclude and highlight our future directions.
II. METHOD
Our method for reliability and sensitivity analysis consists of first identifying ways to convert the UML models (particularly the Activity and Sequence Diagrams) into the modeling language used in Prism, while preserving the semantics expressed in the UML models. The purpose of modeling in Prism, a probabilistic model checking tool, is to perform a qualitative and quantitative dependability analysis of the model using sound techniques. Secondly, we annotate the Prism model with variables denoting component reliabilities and transition probabilities in order to quantify its dependability through PCTL properties. The next section introduces the Prism model checker, whereas the conversion process from UML to Prism models is described in Section II-B. The analysis process is domain-specific and is described in the context of the example in Section III.
A. The PRISM Model Checker
The reliability analysis of the web system in this work is accomplished through a probabilistic model checking tool called Prism [23]. The reason for choosing Prism as the probabilistic state-based model checker in this study was twofold: (1) tool maturity, considering the number of successful case studies that have used the tool [24]; and (2) the richness of the tool environment, which is able to represent various kinds of probabilistic models and their evaluations, as we briefly explain below.
Prism is a tool for formal modeling and analysis of systems which exhibit random or probabilistic behavior. It supports three types of probabilistic models: Discrete-Time Markov chains (DTMCs), Continuous-Time Markov Chains (CTMCs) and Markov Decision Processes (MDPs), plus extensions of these models such as the ability to specify costs and rewards. The tool has three environments: (1) one for system modeling in Prism language, a state-based language derived from the Reactive Modules formalism; (2) one for model simulation; and (3) one for property specification, which uses temporal logic such as the Probabilistic Computational Tree Logic (PCTL) [17], [6] and includes extensions for quantitative specifications and expressions of costs and rewards.
In the modeling environment we model processes, which in Prism are called modules. A model in Prism is composed of a number of modules. Each module has a set of finite-ranged variables, which define the possible states of that module. The final model is the synthesis of all modules through parallel composition. Each module is composed of a set of guarded commands. For example, a DTMC command in Prism takes the form:
\[ \text{[action]}\ \langle \text{guard} \rangle \rightarrow \langle \text{probability} \rangle : \langle \text{update} \rangle; \]
The guard is a predicate over all variables in the model, and once it is satisfied, the module will make a transition with a certain probability to the update state, where \(0 \leq \text{probability} \leq 1\). The action can be used in order to tag a command that is synchronized with other commands in the same or in a different module. When there is no action label, the command will run asynchronously. An example of a simple Prism command is the following:
\[ \text{[notify]}\ s = 0 \rightarrow \text{Rel} : (s' = 1) + (1 - \text{Rel}) : (s' = 2); \]
which states that, if \(s\) is 0, it makes the transition to state 1 with probability \(\text{Rel}\), or to state 2 with probability \(1 - \text{Rel}\). Also, note that we use the action notify in order to synchronize with other commands labeled with the same action, in accordance with Communicating Sequential Processes (CSP) rules [18], which synchronize all commands with the same action label once their guard conditions are satisfied.
Once the system modeling is completed using the Prism specification language, Prism reads and parses the language statements and constructs the corresponding probabilistic model, in this case a DTMC (although it can also be used for CTMC and MDPs as well). Prism computes the set of all states reachable from the initial state and checks the model for deadlocks. In the simulation environment, Prism allows the visualization of possible execution traces of the synthesized model.
Another feature in Prism is the ability to specify properties of the probabilistic model. Properties in Prism are expressed using temporal logic (e.g., in PCTL). Prism also performs model checking, determining the quantitative value of each specified property and whether the model satisfies it. For example, in dependability analysis a very useful property is the reachability property, expressed as \(P_{=?}\,[F\ \Phi]\), which computes the probability that the system eventually reaches a state satisfying \(\Phi\). In reliability analysis, reachability is an important property to satisfy: it guarantees that the final successful state of the system will be reached, regardless of the time elapsed to reach it from the initial state.
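The reachability computation that the model checker performs can be illustrated with a toy sketch (in Ruby, not the Prism language; the state names, component names and reliability numbers are invented): for an acyclic DTMC, the probability of eventually reaching the success state is a weighted sum over successors.

```ruby
REL = { c1: 0.99, c2: 0.98 }   # per-component reliabilities (invented)

# state => { successor => transition probability }
CHAIN = {
  s0:  { s1: REL[:c1], err: 1 - REL[:c1] },
  s1:  { ok: REL[:c2], err: 1 - REL[:c2] },
  ok:  {},   # absorbing success state
  err: {}    # absorbing failure state
}.freeze

# Probability of eventually reaching `target` from `state` (acyclic chain).
def reach_prob(chain, state, target, memo = {})
  return 1.0 if state == target
  return 0.0 if chain[state].empty?
  memo[state] ||= chain[state].sum { |nxt, p| p * reach_prob(chain, nxt, target, memo) }
end

puts reach_prob(CHAIN, :s0, :ok)   # ≈ 0.9702, i.e. 0.99 * 0.98
```

For chains with cycles Prism instead solves the corresponding linear system; this recursion only covers the acyclic case, which matches the sequential scenario structure described here.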
B. Conversion from UML to Prism Models
This section presents how we map UML models, particularly Activity and Sequence Diagrams, to Prism. Initially, each action node becomes a module in Prism. The conversion process consists in first building a state-machine model that represents each node of the Activity Diagram (AD). The AD is composed of two types of nodes: decision and action. Action nodes represent execution scenarios, each represented as a Sequence Diagram (SD). Each message in the SD is represented as a transition between states, annotated with the probability that the component performs that message successfully.
Specifically, the conversion from an action node to a PRISM module is accomplished as follows. For each component service execution in the SD that models the action node, there is a corresponding state in the state machine, while the messages exchanged between components in the SD are represented as labeled transitions between states. Therefore, for each state of the state-machine model there is a component $C$ processing a service. For this reason, each state is associated with the component reliability $R_C$, i.e., the probability of successful execution of the service, while the probability of service failure $(1 - R_C)$ is represented by a transition to the error state $E$. By failure we simply mean that an error is propagated to the service interface, causing deviation from correct to incorrect service [3]. This failure can be classified in various modes: domain, detectability, consistency and severity. The major property of interest in our modeling is the probability of reaching the end state of the system successfully, i.e., reachability.
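Under this construction, a scenario whose messages are executed by a chain of components succeeds only if every service execution succeeds, so its reachability probability is the product of the component reliabilities. A minimal sketch in Python (the reliability values are hypothetical):

```python
def chain_reachability(reliabilities):
    """Probability of reaching the end state of a chain DTMC in which
    each state advances with probability R (the component reliability)
    or moves to the absorbing error state E with probability 1 - R."""
    p = 1.0
    for r in reliabilities:
        p *= r  # survive this component's service execution
    return p

# Hypothetical component reliabilities for one scenario:
print(chain_reachability([0.999, 0.998, 0.997]))
```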
In contrast, decision nodes represent choices, and each of their outgoing transitions is represented as a transition between states in the state-machine model, annotated with the probability of transitions between scenarios. Precisely, the conversion from a decision node to a PRISM module consists of representing the probabilities of transition $PT_{ij}$ from a decision node $i$ to an action node $j$. This information would normally be derived from a system usage profile [28]. Therefore, each decision node in the AD is represented as a state, and each outgoing transition in the state machine is labeled with $PT_{ij}$. Thus, from decision node $i$, the sum of the probabilities $PT_{ij}$ over all successor action nodes $j$ is equal to one.
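A sketch of deriving the $PT_{ij}$ values from a usage profile (the counts below are invented for illustration): the observed frequency of each outgoing edge is normalized so the probabilities sum to one.

```python
# Hypothetical usage-profile counts for the outgoing edges of one decision node.
counts = {"query": 700, "create": 300}

total = sum(counts.values())
pt = {j: n / total for j, n in counts.items()}  # PT_ij per successor node j

# Sanity check required by the construction: probabilities sum to one.
assert abs(sum(pt.values()) - 1.0) < 1e-9
print(pt)
```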
Once the PRISM model is finished, consisting of a set of modules representing state machines, the synthesis of the final stochastic model follows the CSP synchronization rules and the probabilistic compositions of the corresponding Markov model of choice. In our case, we focus on the DTMC class, as it has a more direct correspondence with state-machine models: each state and its respective set of transitions can be consistently represented as a command in a DTMC, which is not usually true for other kinds of Markov models. Therefore, we annotate the models with discrete values of the corresponding probabilities, both component reliabilities and transition probabilities between states.
III. EXAMPLE IN THE WEB DOMAIN
We evaluated the method described previously by analyzing an example in the Web domain. This section describes the example system, which is then evaluated in the following section. Seplag, the target system of our evaluation, is a portal developed by the Laboratory of System and Software Engineering (which is CMM level II certified), hosted at the Computer Science Department of the Federal University of Minas Gerais. This industrial-strength web system organizes the activities related to supply contracting of the State Government, aiming to increase the transparency of contract processes. Its size is about 143 KLOC and 1,950 classes.
In this study, we apply the methodology described in Section II to estimate the web application reliability and validate our results through a process that comprises the following steps: (1) define a high-level view of the architecture in layers, (2) reverse engineer the system by defining the scenario executions as method calls between system classes, (3) define the appropriate abstraction level to represent the system components, (4) associate components with their respective architecture layer, (5) estimate the reliability of those components with respect to a particular failure behavior, (6) estimate the transition probabilities (in our case, between execution scenarios, based on usage profiles), (7) apply the methodology presented in Section II, and (8) measure the actual system reliability (according to the component failure classification) to compare with the results estimated in task 7.
Initially, our knowledge about the system's maturity was restricted to a log file, hosted by Bugzilla [7], mainly containing bugs identified during alpha-test activities. The log file also had several descriptions of bugs. For our study, we only consider failures reported as critical or blocking, namely failures that prevent the execution of the system.
For tasks 1 to 4, there was neither a clear identification of components nor a clear component-based architectural view of the chosen web application. However, the software documentation provided high-level decisions to divide the system into layers. This first obstacle hindered our work and cost us a great deal of time in identifying coarse-grained components that we could consistently associate with the bug-file results. To accomplish this task, we needed the assistance of a software engineer to identify the components and their respective failure data. We then filtered the failures of interest, i.e., critical or blocking, in order to carry out task 5. The estimation of transition probabilities in task 6 was based on information extracted from the database as well as the engineers' educated guess of the frequency of process queries and updates. Finally, task 8 was accomplished based on the same bug file used in task 5.
In the next section, we present the outcome of tasks 1 to 6 and an excerpt of our analysis model for the reliability estimation.
A. Software Requirements and Architecture
Considering the magnitude of Seplag, we decided to focus our analysis on the core of that application, described by the Use Case Management of Purchase Process Records. It encompasses one main flow, six subflows and eleven alternative flows. The use case is represented by four alternative flows regarding the CRUD operations (Create, Read, Update, Delete) related to the purchase processes. According to Synergia engineers, these CRUD operations represent the main
functionality of the Seplag Portal, since they are used by most other functionalities and by the largest number of actors.
According to Seplag software engineers, the web application was developed following high-level architectural decisions based on the MVC style. In Figure 1, we depict an abstract view of the web application architecture with the identified components, composed of five layers and their respective dependencies.
In Seplag’s architecture, the boundary layer gathers the classes implementing the user interface; the control layer groups the classes acting as access point to the use case functions; the entity layer gathers the classes implementing information units potentially reusable inside a domain and often persistent; the persistent layer groups the classes that convert data between the entity layer and the physical mechanisms of persistence, as databases or files; the system layer gathers useful classes implementing common services used by other classes.
Following the first steps of our reliability analysis methodology, we first devise the activity diagram of the Seplag Portal, depicted in Figure 2. That diagram encompasses the CRUD operations previously mentioned.
For each action node in the activity diagram of Figure 2, we create a sequence diagram to model the component interactions within that action. The interactions between the components of each flow were obtained from the method calls analyzed by a software engineer. However, as a clear view of the components was not provided, many classes were identified at this stage as potential candidates for coarse-grained components. We then realized that the coupling between the identified components was quite high.
After extensive refinements supported by Seplag software engineers, we managed to find out those classes that would fit as the desired components in a higher abstraction level. The components are represented in Figure 1 inside their respective layers.
An example of a sequence diagram illustrating the realization of an activity is depicted in Figure 3. That diagram presents the components involved in the Query of Purchase Process use case flow. The initial letter of the component name denotes the layer to which the component belongs. Three other sequence diagrams (not shown here for brevity) correspond to the other activities of the activity diagram and represent the steps of the system for creating, deleting and updating the processes in Seplag.
B. Component Reliabilities and Transition Probabilities
Once the components were identified, we needed to associate them with the failure data in the log. Once again, we relied on Seplag software engineers to make that association. The log file covers a period of one year. Its main fields are: Bug#, the bug identification; Status and Resolution, which define and track the life cycle of the bug; and Severity, which classifies the impact of the bug on the overall functioning of the system. In this work, we have considered only blocking or critical failure types, as they have a significant impact on the system functioning, as presented in Table I.
The log failure data was then used to calculate the component reliabilities. To carry this out, we selected the total number of failures reported as critical or blocking and the number of system executions. The first step consists of associating the failures with their respective components. Compared to the whole process of Seplag's reliability analysis, this step was considered one of the most cumbersome, as we needed to match each failure description to its respective component. The failure description was either a high-level message from the failed service or the raised exception; mostly the former, which required the assistance of the system specialists to match the service failure message to the related implementation classes. To estimate the component reliability, we use the following formula:
\[ R = 1 - \frac{F}{N} \]
(1)
where \( F \) is the number of component failures, considering blocking or critical failure types that were fixed after being detected, and \( N \) is the number of times the components were executed, following the frequency of the CRUD operations registered in the database. Following Equation 1, the Seplag reliability is 99.6%.
The calculation of the transition probabilities, in contrast, was straightforward. The transition probabilities were calculated on the basis of the number of database records, as the context of our experiment focused on the CRUD operations. Finally, with the component reliabilities and the transition probabilities calculated, we could apply our approach to architecture-based reliability analysis. The results are depicted in Table II. Note that the transitions in Table II follow the names of the transitions in Figure 2.
The last step of the analysis methodology is the conversion of the models into PRISM in order to proceed with the software reliability estimation. The overall procedure of the conversion process for our study is as follows. There are four action nodes in the activity diagram, each represented by a sequence diagram, and two decision nodes: one for the query-creation decision and one for the query-delete-update decision. Each action and decision node is modeled as a state machine, following the conversion process described in Section II.
The resulting PRISM module of the Query of Purchase Process modeled as the state machine in Figure 4 is presented in Listing 1.
### TABLE I
<table>
<thead>
<tr>
<th>Failure Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Blocking</td>
<td>Blocks the continuity of work of the testing and maintenance team.</td>
</tr>
<tr>
<td>Critical</td>
<td>Causes inconsistencies or data losses, affecting the functioning of various parts of system.</td>
</tr>
<tr>
<td>Normal</td>
<td>General system errors that do not cause a system outage.</td>
</tr>
<tr>
<td>Minor</td>
<td>Inaccurate messages or errors with very minor or no impact on the system functioning.</td>
</tr>
</tbody>
</table>
### TABLE II
<table>
<thead>
<tr>
<th>Component Reliabilities</th>
<th>Transition Probabilities</th>
</tr>
</thead>
<tbody>
<tr>
<td>B.Action</td>
<td>\( T_{\text{query}} \)</td>
</tr>
<tr>
<td>E.DAOQuery</td>
<td>\( T_{\text{query-loop}} \)</td>
</tr>
<tr>
<td>E.Request</td>
<td>\( T_{\text{delete}} \)</td>
</tr>
<tr>
<td>E.Request</td>
<td>\( T_{\text{update}} \)</td>
</tr>
</tbody>
</table>
In Listing 1, we declare the constants for the component reliabilities in the Query state machine in lines 2 to 6. Note that the failure state modeled in Figure 4 appears as state 7 (line 26) of module QueryPurchase. Two important points are worth noting. First, lines 19 to 24 of Listing 1 are labeled with the `query` and `query_end` actions. These are used for synchronizing actions between modules, allowing the modules to make transitions simultaneously according to their respective guard conditions.
Second, the guards of these commands may refer to variables from other modules, used to indicate that a module starts only once a boolean condition has been satisfied. Listing 1 illustrates this situation: the boolean variable `query_flag` in line 25 indicates that any module succeeding the query starts only once that variable is true, i.e., once the query module has reached its end state successfully.
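The listing itself is not reproduced here, but a minimal sketch in the PRISM language (with invented states and guards, keeping only the structure described above: a reliability constant, a failure state, synchronized `query`/`query_end` actions and a `query_flag` success flag) might look as follows:

```
// Hypothetical sketch in the style of Listing 1 (not the actual listing).
const double R_Action;  // component reliability, declared as a constant

module QueryPurchase
  q : [0..7] init 0;            // state 7 is the failure state
  query_flag : bool init false; // signals success to succeeding modules
  [query]     q = 0 -> R_Action : (q' = 1) + (1 - R_Action) : (q' = 7);
  [query_end] q = 1 -> (q' = 2) & (query_flag' = true);
endmodule
```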
The final model synthesized in PRISM contains 58 states and 128 transitions. In the PRISM simulation environment, we made extensive runs of the synthesized model to check whether it consistently expresses the behavior modeled in the UML specification of the Seplag web application.

**Fig. 4. State Machine Representation of the Query of Purchase Process**
As previously mentioned, the architectural view of Seplag was restricted to a layered strategy, and a clear component view of each layer's content was not initially provided. As a result, we initially had to investigate the implemented code in order to extract the components. In that regard, there were classes with multiple purposes, in which we identified a high degree of coupling. This characteristic made the clear identification of components (modularization) difficult. For example, `ControlManagementPurchaseProcessRecord` is related to five other classes: `ActionDeletePurchaseProcess`, `ActionManagementPurchaseProcess`, `ValidatorPurchaseProcess`, `ManagerPurchaseProcess` and `PurchaseProcess`. The aid of software engineers thus became fundamental to successfully conclude our experiment, and particularly to identify the components and their respective failure types.
### IV. Qualitative and Quantitative Analysis
#### A. The Qualitative Analysis
This analysis focuses on the effort to apply our reliability analysis technique to the Seplag portal. The major challenges in applying our approach are inherited from architecture-based reliability techniques in general, as discussed below.
First of all, as for the system usage profile, we extracted this information directly from the database. This step was facilitated due to the scope of the CRUD operations of the use case. The significance of the extracted information can be reasoned by the fact that all the 88,000 processes in the database of the Seplag system were registered via this CRUD use case.
As for the system components, the portal was implemented using different frameworks, such as Struts [2], Spring [35] and Hibernate [31]. Thus, some of the classes analysed were dependent on their design (e.g., the Action component of the Boundary layer stems from the application of Struts). Consequently, the problem of components of heterogeneous nature [29] and of distinct abstraction levels and granularity was observed during our study.
#### B. The Quantitative Analysis
The quantitative analysis of Seplag reliability is carried out in two parts. First, we compute the reliability of the software system in PRISM and compare it with the actual system reliability. To estimate the actual system reliability, we use the same Equation 1, where, in this case, \( F = 21 \) is the number of system failures, considering blocking or critical failure types that were fixed after being detected, and \( N = 5{,}156 \) is the number of system executions, including test cases. We should point out that the number of system executions was derived from the number of inclusions, exclusions and queries of the processes registered in the database. In particular, the number of queries was an educated guess based on the number of processes included and excluded: we assumed that, for each process included and excluded, a query was carried out to verify the success of the operation.
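Plugging these figures into Equation 1 reproduces the reported value (a quick check in Python):

```python
# Equation 1 with the system-level figures: F = 21 blocking or critical
# failures over N = 5,156 recorded executions.
F, N = 21, 5156
R = 1 - F / N
print(round(R * 100, 1))  # → 99.6
```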
Following Equation 1, the Seplag reliability is 99.6%. Note that we only consider blocking or critical failure as other bugs would not deviate the system from its correct service execution [3].
We follow the methodology presented in Section II to compute the Seplag model reliability in PRISM. The model consists of obtaining the system reliability as the probability of reaching the terminal state from the initial state [9]. In PRISM, we use the following PCTL formula to obtain the
Seplag system’s reliability:
\[ P =?\, [F(\text{end})] \]
(2)
Considering that we have the initial value of the reliability of each component, we obtained a single approximate reliability value for the Seplag system: 98.7%. This result considers the component reliability values and transition probabilities presented in Table II. Comparing the actual system reliability to the one computed using our methodology, the error is roughly 1%, which shows quite high agreement between the computed and observed results.
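To make the reachability computation behind Equation 2 concrete, the following sketch (in Python rather than PRISM, with hypothetical transition probabilities) computes the probability of absorption in the end state of a small DTMC by value iteration:

```python
# States: 0 = start, 1 = intermediate, 2 = end (absorbing), 3 = error (absorbing).
# Transition probabilities are illustrative, not the Seplag model's values.
P = [
    [0.0, 0.99, 0.00, 0.01],
    [0.0, 0.00, 0.98, 0.02],
    [0.0, 0.00, 1.00, 0.00],
    [0.0, 0.00, 0.00, 1.00],
]

def reach_end(P, start=0, end=2, iters=1000):
    """Value iteration on x_i = sum_j P[i][j] * x_j with x_end fixed at 1;
    x[start] converges to the probability of eventually reaching `end`."""
    n = len(P)
    x = [0.0] * n
    x[end] = 1.0
    for _ in range(iters):
        x = [1.0 if i == end else sum(P[i][j] * x[j] for j in range(n))
             for i in range(n)]
    return x[start]

print(reach_end(P))  # ≈ 0.99 * 0.98 = 0.9702
```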
One of the most useful ways to exploit our architecture-based methodology is to explore the PCTL query by treating each individual component reliability and transition probability as an independent variable, computing and plotting the resulting system reliability values in order to identify those parts that have a higher impact on the overall system reliability. This is accomplished in the following section.
1) The Sensitivity Analysis: The most useful way to analyze the model and to gain insights into its dependability, mainly reliability, is to compute and plot the values as some parameters are varied, i.e. to perform a sensitivity analysis. Plotting the value of system reliability while varying the reliability of the components will reveal which components have higher impact on the overall system reliability. Therefore, focusing on enhancing the reliability of those components with proper allocation of project resources will be paramount to enhance the reliability of the overall system.
We structure the sensitivity analysis in two parts. First, we consider the general scenario without taking into account any particular system usage profile. Second, we conduct the sensitivity analysis based on the most frequent system usage profile. For the quantification of the analysis we use the same PCTL reachability statement of Equation 2.
According to this statement, we want to obtain the probability of reaching the final end state without failing. We take the reliabilities of the components as input parameters and vary the reliability of one component at a time, from 0 to 100%, while fixing the other components at 100%. Note that, for this analysis, we only consider the long-run system reliability, i.e., we do not take into account the amount of time it takes for the system to reach its successful end. The outcome of this first part of the sensitivity analysis is depicted in Figure 5.
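A minimal sketch of this sweep (Python; a simple series-reliability model with hypothetical values, not the PRISM experiment itself):

```python
def system_reliability(rs):
    """Series model: the system succeeds only if every component does."""
    p = 1.0
    for r in rs:
        p *= r
    return p

others = [1.0, 1.0, 1.0]  # remaining components fixed at 100%
for rc in (0.0, 0.25, 0.5, 0.75, 1.0):  # sweep one component's reliability
    print(rc, system_reliability([rc] + others))
```

With the other components fixed, the plot of system reliability against the varied component's reliability is a line whose slope measures that component's impact, which is what Figure 5 visualizes.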
The Y-Axis in Figure 5 represents the reliability of the system, while the X-axis represents the reliability of a component. The analysis includes six components of the Seplag web system. We order the legend from top to bottom, from the least to the most significant component.
According to the plotted results, the steeper the slope for a component, the more significant its impact on the system reliability. Accordingly, the most critical components are Control and Process, while Request and Query are the least critical. Indeed, this can be explained by the fact that Control and Process are central to the Control and Entity layers, respectively. This result also shows that the Seplag system relies on these two components for the part of the system we analyzed, revealing high coupling between these two components and the rest of the components. Indeed, as we are analyzing the operations on the process itself, the Process component plays an important role in this system. Therefore, this design decision is consistent with the outcome of this first sensitivity analysis.
The results of this analysis have shown to be potentially useful for the testing team in particular. Their tests are mostly oriented by use-case execution and do not provide a direct relation between the developed components and their impact on the overall system reliability.
2) Analyzing the Sensitivity of the Transition Probabilities: The second part of our sensitivity analysis focuses on the impact of the scenario transitions, modeled as the outgoing transitions of the decision nodes presented in Figure 2.
We should note that, for the sensitivity analysis of the scenario transitions, we had to lower the component reliabilities to 90%; otherwise, the variation of the transition probabilities would not be noticeable. Figure 6 depicts the outcome of our analysis. From that figure, we notice that the \( T_{\text{update}} \) and \( T_{\text{delete}} \) transitions present steeper curves compared to \( T_{\text{query}} \).
Considering the CRUD operations, those that play a major role in the communication with the database are indeed the operations of creating, updating and deleting. In our case, the operations of updating and creation were described by the same scenario, according to the development team of Seplag. Therefore, it is natural to conclude that \( T_{\text{update}} \) and \( T_{\text{delete}} \) would be the transitions with higher impact on the system reliability. This outcome is, therefore, consistent with the results obtained through the sensitivity analysis.
C. Threats to Validity
1) Construct Validity: Construct validity concerns establishing correct operational measures for the concepts being studied. According to the methodology (Section II), the PCTL property assessing the system's reliability relies on the reliability of its components and on the probability of transitions among modules. In the example analyzed (Section III), log failure data was used to calculate the component reliabilities, and the transition probabilities were calculated on the basis of the number of database records. We focused on critical or blocking failures.
2) Internal Validity: Internal validity concerns establishing a causal relationship, whereby certain conditions are shown to lead to other conditions. We noticed that the variation of the reliability of individual components and their probabilities of transition do impact the final results computed using PRISM. In particular, the sensitivity analysis well illustrates this impact and the identified critical components correspond to their key role in the architecture. However, since we carried out an analysis by means of an example, in which control of variables is limited, further empirical assessment is necessary for validating the proposed method.
3) External Validity: External validity concerns establishing the domain to which a study’s findings can be generalized. In particular, the conversion phase could be applied to other domains, whereas the analysis phase is domain specific. DTMC was used for system modeling; despite assuming discrete-time probability input values, we noticed that the model parametrization in PRISM and the action annotations used to synchronize PRISM modules could be applicable in other contexts. Indeed, in previous work we showed the applicability of the method in the Ambient-Assisted Living domain [34]. Nevertheless, we are aware that the current study was based only on one single system—albeit an industrial-strength one—and that there is a myriad of more complex systems in different domains. At the moment we are planning to conduct further empirical studies to investigate the generalization of the applicability of this approach.
4) Conclusion Validity: Conclusion validity concerns whether it is possible to draw correct conclusions from the results, e.g., reliability of the results. First, according to Section IV-B, the method is sound within the accuracy of 1%. Second, in particular sensitivity analysis showed that improving component reliability has a positive impact on system reliability and also to which components system reliability is more sensitive (e.g., Process and Control). This is consistent with the design of placing such components at the core of the system’s architecture. Nevertheless, since this is an exploratory case study, in which control of variables is limited, further empirical studies should be carried out to further assess these claims.
5) Repeatability: Repeatability concerns demonstrating that the operations conducted in a study can be repeated with the same results. We expect that replications of our study should offer results similar to ours. Indeed, the characteristics of the specific system architecture may differ from the one used in the current study, but the underlying reference architecture should remain unchanged.
V. RELATED WORK
Software architecture provides a means to achieve the system qualities over the software life cycle [4]. Nevertheless, the architecture, by itself, is unable to achieve qualities [4]. Methods for software architecture evaluation such as SAAM [21] or ATAM [20] analyze it to show the satisfaction of certain properties. An analysis of these methods was performed in [10].
The identification of architecture components was approached by DiLucca et al. through a reverse engineering technique to reconstruct UML diagrams providing distinct views of the web applications [26]. Belletini et al. use class and state diagrams obtained through static and dynamic web application analysis to support the test case generation [5].
From the perspective of existing web logs there are some emerging results [32]. However, the focus of their analysis is not on the higher abstract level of software components. Some approaches to reliability assessment of web applications do exist [1]. However, they differ from our work in various points: the accuracy of the model estimation, consideration to current web development technologies and identification of critical components.
There are several works by Ghezzi's group related to probabilistic model checking for reliability analysis [11], [13], [12], [15]. Most of their work has focused on run-time/dynamic analysis, including monitoring requirement properties in their KAMI approach [11] as well as optimization techniques for evaluating the satisfaction of reliability requirements at run time [13]. The latter, in particular, considers both design-time and run-time verification. More recently, KAMI has been woven into the QoSMOS approach, in which a set of tools, including PRISM, realizes comprehensive dynamic QoS management in service-based systems [8]. Our approach to dependability analysis is complementary to theirs in the sense that our scenario-based requirements are translated into DTMC models at design time and can therefore be straightforwardly integrated. Moreover, in this work we focus on a component-based view of the web system activities (as opposed to services), so that our analysis, particularly the sensitivity analysis, may directly reflect the impact of component failures on the overall system reliability.
In previous work, Rodrigues et al. applied the architecture-based reliability analysis used in this work to the Ambient-Assisted Living (AAL) domain [34]. The focus of that work was on exploring critical reliability issues of AAL systems prior to implementation, when no component reliability or system usage is known. In this work, we have focused on exploring the technique's accuracy as well as the feasibility of applying it to current industrial-strength web applications, where a clear definition of the components (and of their respective failures) is not usually provided. Also, Rodrigues et al. [33] proposed a Message Sequence Chart (MSC) based reliability prediction technique, automated using the Labelled Transition System Analyser tool (LTSA) [22]. Though that was sound work, the LTSA tool was not originally designed for probabilistic model checking, unlike PRISM. On the other hand, PRISM does not provide scenario-based modeling facilities such as the MSC plugin provided by LTSA [36]. This study explored the potential of the PRISM tool, using a model-driven approach, to accurately conduct a component-based reliability analysis in the complex realm of multi-layered web applications.
VI. CONCLUSION AND FUTURE WORK
The purpose of this work was to explore the feasibility of applying our previous work [34] to the domain of multi-layered web applications, as a real-life case study. Additionally, applying our methodology showed that such analysis can highlight gaps in the software development cycle, particularly on the software architecture definition and the clear component separation. Despite the limitations, the application of our technique showed quite an accurate result for the estimated web application reliability: relative error of roughly 1%, compared to real data. Also, we carried out a sensitivity analysis in this study to identify system components that have the highest impact on the dependability of the web application. The outcome of this analysis may be of great use, for instance, to enhance the quality of tests with respect to prioritizing most critical components.
For future work, we plan to scale the quantitative analysis to various large web systems. One such example could be the analysis of the various other parts that constitute the web application we used in this work. We also plan to analyse other specific web technologies (e.g., AJAX) that have changed the traditional web development paradigm from multi-page to single-page applications. Finally, we plan to evaluate the potential of our technique to augment the quality of test cases based on the sensitivity analysis results, since they reveal those components that are most relevant from the system reliability perspective.
ACKNOWLEDGMENT
We would like to acknowledge the Synergia Lab at DCC-UFMG and, particularly, Clarindo Padua without whom none of our work would be realized.
REFERENCES
|
NFD: Using Behavior Models to Develop Cross-Platform Network Functions
Hongyi Huang¹, Wenfei Wu¹, Yongchao He¹, Bangwen Deng¹, Ying Zhang³, Yongqiang Xiong², Guo Chen⁴, Yong Cui¹, and Peng Cheng²
1 Tsinghua University, 2 Microsoft Research, 3 Facebook, 4 Hunan University
Abstract—The NFV ecosystem is flourishing and more and more NF platforms appear, but this diversity makes it difficult for NF vendors to deliver NFs rapidly to diverse platforms. We propose an NF development framework named NFD for cross-platform NF development. NFD's main idea is to decouple the functional logic from the platform logic—it provides a platform-independent language to program NFs' behavior models, and a compiler with interfaces to develop platform-specific plugins. By enabling a plugin on the compiler, various NF models can be compiled to executables integrated with the target platform. We prototype NFD, build 14 NFs, and support 6 platforms (standard Linux, OpenNetVM, GPU, SGX, DPDK, OpenNF). Our evaluation shows that NFD saves development workload for cross-platform NFs and outputs valid and performant NFs.
I. INTRODUCTION
The ecosystem of network function virtualization (NFV) has gradually matured over the past few years. Network clients propose the requirement of in-network functionalities; network operators would adopt one of the various NF platforms as the runtime environment or management framework for NFs, e.g., Azure VFP [1], OpenNF [2], E2 [3], LeanNFV [4], AWS Nitro [5], etc.; NF vendors¹ would provide various software NFs; and integration of the platform and NFs would be established to serve network clients (e.g., cloud tenants or enterprise network users).
However, the diversity of NF platforms and NF logic, together with the long business cycle of NF software development, slows NF developers' delivery of NFs. On the one hand, there is a huge number of combinations of environments and NFs — an increasing number of NF platforms are proposed for different reasons, e.g., acceleration (DPDK, SR-IOV, AWS Nitro [5]), security (SGX), scalability (Azure VFP [1], OpenNF [2]), and manageability (E2 [3], LeanNFV [4]); and NFs can be highly customized for different network users, e.g., a load balancer with blacklisting, or an unbalanced load balancer for heterogeneous backends.
On the other hand, developing or porting an NF to a specific platform involves a non-trivial business cycle — developers need to spend effort understanding the NF logic, decoupling and rewriting the environmental logic, and developing and testing. Such a contradiction would potentially slow down NF vendors' delivery of NFs and become an obstacle to the prosperity of NFV technology.
In this paper, we explore the possibility of building an NF development framework that can rapidly build NFs for diverse platforms. We make an empirical study of several existing NF platforms and categorize them into two classes — execution environments and management platforms. The first class requires NFs to be developed with a certain piece of logic explicitly declared (i.e., programming abstractions), and the second class additionally requires instrumenting that logic into the NF program structure.
Therefore, we design an NF development framework named NFD. NFD consists of a domain-specific language (DSL) and a compiler. The NF language is platform-independent and has several built-in programming abstractions (we summarize them from existing frameworks and also add our own). The compiler backend has interfaces to operate on the NF program syntax tree so that programmers can instrument the program structure. Such a design decouples an NF's functional logic from its environmental logic — developing $m$ NF models and $n$ platform plugins yields $m \times n$ NF implementations at the cost of only $m + n$ development efforts ($m + n \ll m \times n$ as $m$ and $n$ grow).
Scope. NFD is constructed based on an empirical study of existing platforms. It supports these existing platforms, but carries no guarantee for possible future platforms. However, as long as a platform imposes the same development requirements on NFs — specific programming abstractions and certain program structures — it can be integrated into NFD following the same methodology as in this paper.
We prototype NFD, and develop 14 NFs on 6 platforms. The platforms are standard Linux, DPDK, GPU, SGX, OpenNF, and OpenNetVM. The evaluation shows that NFD can be used to develop NFs with environmental adaptation, correct logic, and satisfactory performance. In this paper, we make the following contributions.
- Design and prototype NFD, the first solution for cross-platform NF development. NFD leverages domain-specific language and compiler technologies to decouple packet processing logic from environment adaptation logic, which can significantly reduce the NF delivery cycle.
- Implement and contribute 14 NFs on 6 platforms, as well as a commodity-equivalent complex NF, to the community. These NFs are validated to have correct functional logic and satisfactory performance compared with commodity NFs, and the development process shows that NFD can reduce development workload.

Wenfei Wu is the corresponding author.

¹They can be the NF developers in Azure or AWS [1], or traditional device vendors who provide software versions, e.g., Palo Alto Networks, Cisco, or Juniper, or new NFV startups [4].
II. PROBLEM ANALYSIS
We elaborate on our empirical study of existing NF platforms and the intuitions drawn from them, and then give an overview of the development framework.
A. Study of NF Platforms
NF platforms fall into two categories. Some focus on improving NF performance or security in the data plane, and the others focus on the interaction of NFs with the control plane.
1) Execution Environments: A class of NF frameworks provide new execution environments to NFs [5]–[11], and they usually replace a certain piece of logic in NFs with optimized implementation.
Example of accelerating NF I/O. Data Plane Development Kit (DPDK) allows an application to send/receive packets directly to/from NICs, which bypasses the protocol stack in the OS kernel. DPDK can significantly accelerate packet I/O in NFs and thus draws wide attention [6]. To apply DPDK to an existing NF, the NF developer needs to identify the packet I/O logic in the NF program and replace the I/O functions as well as the corresponding data structures — e.g., replace the char* pkt and pcap_loop() in libpcap with struct rte_mbuf and rte_eth_rx_burst() in DPDK.
Example of accelerating pattern match in NFs. GPUs naturally support parallel processing, which is applied in NFs to process multiple packets, or multiple chunks of one packet, in parallel (e.g., pattern match, parallel encryption [7]–[9]). When applying GPU acceleration, NF developers need to identify the locations of the operators, build the GPU-based implementation, and conduct the replacement. As in the example above, this replacement needs to be performed on NFs one by one.
Example of securing NF states. Outsourcing NFs to an untrusted environment (e.g., a public cloud) usually raises security concerns for NF users. A line of work proposes to apply Intel SGX to protect NF states from the untrusted underlying OS [10], [11]. However, the modification is still non-trivial: the NF developer needs to identify the sensitive code and data in the NF (usually NF states) and seal them with SGX abstractions. For example, when Han et al. ported an IDS to Intel SGX, the modification required about 2.5k extra lines of code [11].
Intuition. We observe that developing (or porting) NFs for these platforms usually focuses on a specific piece of logic and implements it in an optimized way. Thus, the intuition for building NFs universal to these platforms is to use high-level programming abstractions in place of that code in the programs, and to let the compiler link the programming abstractions to platform-dependent implementations at link time.
2) Management Platforms: The second class of NF platforms [1]–[4] integrates NFs with the control plane for better management. They still operate on a certain piece of logic, but they further need to instrument NFs to achieve the interaction.
Example of integrating NFs with a state management framework. OpenNF is a network controller which jointly controls flow routing and NF placement in a network. It can flexibly scale NFs out and in. To integrate an NF with OpenNF, the NF developer needs to add a local agent to the NF, which communicates with the OpenNF controller and operates on NF local states (add/remove/modify). This is usually not a trivial process; as described by [2], [12], modifying PRADS and Snort each takes more than 100 man-hours.
Intuition. We observe that these platforms not only operate on a piece of logic in NFs but also need to instrument the program to interact with that logic in the NF program structure. Thus, the intuition to build NFs for these platforms is to further build a compiler backend, which can traverse the NF program structure and apply changes to all NFs.
B. Solution Overview
Towards the goal of building a rapid development framework, we adopt two techniques in our solution — a domain-specific language and compile-time program transformation.
We propose the NFD language to program NFs. The language contains common language elements such as basic types, expressions, statements, and control flows found in high-level languages (e.g., C/C++). More importantly, it declares certain elements as programming abstractions. The programming abstractions are summarized from individual NF porting cases [2], [11], current NF development frameworks (e.g., NetBricks [13]), and some network management solutions (e.g., NF placement, verification). NF developers use the language to write NF (behavior) models.
As Fig. 1 shows, NFD has a compiler that translates a model into an executable and integrates platform-specific features. The compilation first translates an NF model into a syntax tree (compiler frontend parser), and then traverses the syntax tree to generate runnable code (compiler backend). The NFD compiler's backend generates C/C++ code for standard Linux by default, and also provides interfaces to (1) the syntax tree of the NF model and (2) a tree-traversal API, which can override the tree-to-code translation or instrument other logic. Thus, a platform developer can use these interfaces to build platform “plugins”. By combining an NF model and a platform plugin, NFD generates an executable that both has the NF model's functionality and adapts to the platform.
III. NFD LANGUAGE AND NF MODELS
We introduce the NF modeling language and the corresponding NF programming abstractions, and then show the representative SMAT model structure and several NF examples.
A. NF Modeling Language
General Program Elements: const c, variable, expression e, condition x, model, statements stmts, statement stmt, if statement, loop statement (productions omitted).
NFD Specific Extension
NFD Specific Extension: header field h ::= sip | dip | sport | dport | proto | …; state s ::= declare state s
Fig. 2: NFD language for NF models
Figure 2 shows the NFD language syntax. Note that this language is a summary of several existing solutions [14]–[17] — it can express the same semantics as those solutions, and is also enriched with several new abstractions (§ III-B). Alternative language designs are also acceptable as long as they can express NF logic.
The NFD language contains the basic language elements of general high-level programming languages (e.g., C, C++, Rust), including basic types, expressions, statements, and control flows, so that the language is semantically complete to express all existing NFs developed in high-level languages. The semantics of these elements are the same as in the high-level languages, and we omit them here for space. A formal definition of the language semantics is in [18].
We further introduce a few NF-specific extensions. The extensions do not change the language syntax, but explicitly declare some elements as NF-specific logic. We reserve a few keywords (sip, dip, etc.) to represent packet header fields; they refer to the current packet at runtime. We explicitly declare NF state as a “state variable”.
We further define a few derived symbols and notations in Table I to simplify the text description. The “[]” operator has multiple meanings: (1) if the input is a packet header field (e.g., sip, dip), it parses the header to the corresponding layer and fetches or modifies the field value; (2) if the input is a tag (e.g., BR, output in § III-C), the operator looks up or modifies the map structure [19]; (3) if the input is an attribute (e.g., size in the rate limiter example below), the operator computes and returns the corresponding value.
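To make the three meanings of “[]” concrete, here is a minimal Python sketch (not NFD's implementation; the header field names and the size attribute come from the text above, everything else — class shape, tag storage — is illustrative):

```python
class Flow:
    """Illustrative stand-in for NFD's flow object with an overloaded [] operator."""
    HEADER_FIELDS = {"sip", "dip", "sport", "dport", "proto"}

    def __init__(self, headers, payload=b""):
        self._headers = dict(headers)   # parsed header fields
        self._payload = payload
        self._tags = {}                 # tag -> value map structure

    def __getitem__(self, key):
        if key in self.HEADER_FIELDS:   # (1) header field: fetch from parsed headers
            return self._headers[key]
        if key == "size":               # (3) attribute: computed on demand
            return len(self._payload)
        return self._tags.get(key)      # (2) tag: look up the map structure

    def __setitem__(self, key, value):
        if key in self.HEADER_FIELDS:   # (1) header field: modify in place
            self._headers[key] = value
        else:                           # (2) tag: modify the map structure
            self._tags[key] = value

f = Flow({"sip": "10.0.0.1", "dip": "10.0.0.2", "sport": 1234,
          "dport": 80, "proto": "tcp"}, payload=b"hello")
f["output"] = "IFACE0"          # tag write
assert f["sip"] == "10.0.0.1"   # header read
assert f["size"] == 5           # computed attribute
assert f["output"] == "IFACE0"  # tag read
```

The single operator dispatching on its argument kind is what lets NF models stay terse while the generated code decides at translation time which of the three behaviors applies.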
Table I: Derived symbols in NFD language — f[h], f[TAG], r, R, f ∈ R (descriptions omitted).
We allow users to implement their own programming abstractions. These abstractions are denoted as UD_Op("Func_Name", *args), and the user should provide a corresponding implementation such as void Func_Name(*args). They belong to the class “Expr_Op”.
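A minimal sketch of how user-defined abstractions could be wired up (the registry, decorator, and the toy hash body are illustrative assumptions, not NFD's actual mechanism, which routes UD_Op through the compiler):

```python
# Illustrative registry for user-defined operators (UD_Op): the user registers
# an implementation under a name, and the model invokes it by that name.
UD_OPS = {}

def register(name):
    def wrap(fn):
        UD_OPS[name] = fn
        return fn
    return wrap

def UD_Op(name, *args):
    # dispatch a named user-defined abstraction to its implementation
    return UD_OPS[name](*args)

@register("hash")
def _hash(flow_key, n_backends):
    # toy deterministic choice of a backend index (illustrative only)
    return sum(flow_key.encode()) % n_backends

assert UD_Op("hash", "10.0.0.1:1234", 4) in range(4)
```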
In NFD, we implement “Encrypt” (encrypting a byte stream), “PatternMatch” (searching a pattern in a byte stream), “hash” (in hash-based load balancer), and “NAT” (a non-replacement IP sampling algorithm for IP translation). We also create two long-neglected abstractions in NFD—state abstraction and time-driven logic abstraction.
State abstraction. An NF state is usually associated with a flow of a certain granularity, and all operations on the state should fall on one specific instance of that granularity. For example, a 5-tuple “per-flow packet counter” is actually not one counter but stands for a set of instances, each counting one 5-tuple flow's packets. Such states are usually declared as a group of variables in existing frameworks (e.g., array “int counter[1000]” or “map<flow, int> counter”), and each state update needs to be accompanied by a lookup in the group of variables.
NFD uses a class to abstract the NF state. The state class has an attribute describing its granularity (i.e., a list of header fields, e.g., 5-tuple). The class maintains concrete instances of the same granularity internally, and all operations upon the state class would fall on an instance (by default of the current flow in processing). In the per-flow counter example above, the counter should be declared as
```plaintext
int counter <sip, dip, sport, dport, proto> = 0.
```
The instances of a state class are allocated on demand: NFD overrides all operators of the state class; once a state is operated on, the operator function first checks whether the “current flow” has a corresponding instance of that state; if not, a new instance for the flow is created and added to the state instance map; the operator then proceeds with the instance. Figure 3 shows the implementation of the state class in the per-flow monitor example, including attributes, instances, and an overridden operator ++.
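The on-demand instance mechanism can be sketched as follows (a Python analogue of the C++ state class described above; the granularity tuple mirrors the per-flow counter declaration, while the class shape and method names are illustrative):

```python
class State:
    """Per-granularity NF state: one instance per distinct value of the key fields."""
    def __init__(self, granularity, init_value):
        self.granularity = granularity  # e.g., ("sip","dip","sport","dport","proto")
        self.init_value = init_value
        self.instances = {}             # flow key -> instance value

    def _key(self, flow):
        return tuple(flow[h] for h in self.granularity)

    def get(self, flow):
        # allocate the instance on demand, as the overridden operators do
        return self.instances.setdefault(self._key(flow), self.init_value)

    def incr(self, flow):               # analogue of the overridden operator ++
        k = self._key(flow)
        self.instances[k] = self.instances.get(k, self.init_value) + 1
        return self.instances[k]

# int counter <sip, dip, sport, dport, proto> = 0
counter = State(("sip", "dip", "sport", "dport", "proto"), 0)
a = {"sip": "10.0.0.1", "dip": "10.0.0.2", "sport": 1, "dport": 80, "proto": 6}
b = {"sip": "10.0.0.3", "dip": "10.0.0.2", "sport": 2, "dport": 80, "proto": 6}
counter.incr(a); counter.incr(a); counter.incr(b)
assert counter.get(a) == 2 and counter.get(b) == 1  # independent per-flow instances
```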
**Time-driven logic abstractions.** Some NFs contain time-driven logic; for example, a rate limiter “periodically” refreshes tokens for packet dispatching. This abstraction was not proposed in existing NF development frameworks. NFD captures it by adding an operator $\text{timer}(\text{flow}, \Delta t)$ to describe resubmitting a flow after time $\Delta t$, which complements the NF time-driven logic.
**Advantage.** Using programming abstractions instead of self-development has two benefits. (1) It helps developers reuse code and avoid mistakes. For example, using the state abstraction above saves the development effort of rebuilding it, and avoids mistakenly declaring the state in Figure 3 as a single variable. (2) It helps NFD locate the platform-specific target piece of logic in later platform integration (§ V). For example, the state abstraction helps find the states and integrate with a state management system [2].
C. A Model and Use Cases
Fig. 4: SMAT syntax (entry and SMAT productions; details omitted)
We show a few cases where NFs are developed in the NFD language. As discussed in a few existing works [24] and exercised by industry [1], a wide range of NFs can be implemented as a stateful match-action table (SMAT), whose structure is expressed in NFD as Fig. 4 shows. We visualize the programs as tables for a better view and for space limitations. SMAT's semantics is that each entry first matches flows and states; if the match result is true, the action is taken and processing stops; otherwise, it proceeds to the next row (i.e., the first match applies).
Figure 5 shows the examples of a stateful firewall, a stateful NAT, and a load balancer that stores the consistent mapping of a flow to a backend server. In addition, we design a rate limiter (Figure 6) to validate the $\text{timer}$ operator. It uses the leaky bucket algorithm: the rate limiter refreshes tokens periodically; for each traversing packet, if there are enough tokens left, the packet is sent and tokens are consumed; otherwise, it is discarded.
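The first-match-applies semantics can be sketched directly (an illustrative Python rendering, not NFD-generated code; the stateful-firewall-like entries and the PASS/DROP outcomes are made up for the example):

```python
# A SMAT is an ordered list of (match, action) pairs over (flow, state);
# the first entry whose match holds fires its action, and evaluation stops.
def run_smat(entries, flow, state):
    for match, action in entries:
        if match(flow, state):
            return action(flow, state)
    return None  # no entry matched

# Illustrative table: pass flows from already-seen sources; otherwise
# record the source and drop (the wildcard entry matches everything).
entries = [
    (lambda f, s: f["sip"] in s["seen"],
     lambda f, s: "PASS"),
    (lambda f, s: True,  # wildcard entry: reached only if nothing above matched
     lambda f, s: (s["seen"].add(f["sip"]), "DROP")[1]),
]

state = {"seen": set()}
assert run_smat(entries, {"sip": "10.0.0.1"}, state) == "DROP"  # unknown source recorded
assert run_smat(entries, {"sip": "10.0.0.1"}, state) == "PASS"  # now the first entry matches
```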
IV. NF COMPILATION
NFD compiles NF models to NF programs and also provides the syntax tree of the NF model.
**NF Code Generation.** NFD compiles an NF model to a C++ NF program by the following transformation. (1) Most basic elements (e.g., control flows, expressions, predicates, and policies) in NFD language can be implemented in C++ directly. (2) States are declared and initialized as global variables at the beginning of the program. (3) Time-driven logic is incorporated as Fig. 7 depicts. The program initialization and the flow processing can add time events to the timer event queue; the timer signal handler calls the flow processing logic recursively. The timer signal is masked at the beginning of each pass of flow processing and unmasked at the end. Thus, timer events would not interleave with the flow processing iteration, preventing timer events from preempting flow processing and mistakenly polluting states in use. (4) All NFs share a common program skeleton for packet I/O: the compiler declares and initializes states at the beginning of the program, wraps up NF model code in an infinite loop and adds a flow receiving/sending function at the beginning/end of the loop. Thus, the NF program would repeatedly fetch and process flows.
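The masking discipline in step (3) can be illustrated with a signal-free simulation (a hypothetical Python sketch, not the generated C++: timer events that arrive during a packet's processing are only queued, and are drained between iterations, so they never interleave with flow processing):

```python
from collections import deque

timer_queue = deque()   # pending timer events
log = []

def timer_handler(name):
    log.append(f"timer:{name}")

def process_packet(pkt, fire_timer_during=None):
    # "masked" region: a timer arriving here is queued, not handled
    log.append(f"pkt:{pkt}:start")
    if fire_timer_during is not None:
        timer_queue.append(fire_timer_during)  # event arrives mid-processing
    log.append(f"pkt:{pkt}:end")

for i, timer in [(0, "refresh"), (1, None)]:
    process_packet(i, fire_timer_during=timer)
    while timer_queue:                         # "unmasked": drain between packets
        timer_handler(timer_queue.popleft())

# the timer fires only after packet 0 finishes, never in the middle of it
assert log == ["pkt:0:start", "pkt:0:end", "timer:refresh",
               "pkt:1:start", "pkt:1:end"]
```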
Fig. 5: Examples of NF models

Fig. 6: The model of a rate limiter
After compilation, most basic language elements are naturally supported by C++ (e.g., arithmetic operator, control flows). Remaining operations are supported by the NFD library including some complicated operators (e.g., “PatternMatch”, “Encrypt”), “flow” class with “[]” as in § III-A and “state” class as in § III-B. In the final compilation from a C++ program to a binary executable, they would be linked together.
**NF Syntax Tree.** An NF model built on the NFD language syntax follows a tree structure: a program derives its composing sections (i.e., I/O, initialization, model), each section derives statements (i.e., match and action), and each statement derives basic symbols (variables, constants, and operators).
For example, Fig. 8 shows the syntax tree of a SMAT, where the root “program” derives an “init” block and a “loop” block, the “Match-Action Table” block derives multiple “entry” blocks, and each entry can derive the predicate and policy statements (omitted in the figure). The variables, constants, and operators in predicate and policy statements are basic symbols, and the remaining nodes are deriving symbols.
V. NF-ENVIRONMENT INTEGRATION
NFD provides interfaces to operate on the NF syntax tree, by which environmental features can be added to the NF program. We show cases in § V-B where the integration is performed automatically using NFD.
A. Programming Interfaces
The programming interfaces for platform integration are a tree-traversal function and per-symbol callback functions. (1) According to the NFD syntax, the NFD compiler generates one callback function for each symbol. For example, it generates a `visitSMAT()` for the symbol SMAT and a `visitEntry()` for Entry in Fig. 8. These callback functions are initially empty but can be overridden by programmers to add their own logic. (2) NFD provides a tree-traversal function for syntax trees. Given a syntax tree, the function traverses it in depth-first-search (DFS) order; when visiting each node, it checks the type of the node (e.g., SMAT, Entry, or Init) and invokes the corresponding callback function.
Note that the preliminary compilation in § IV is also implemented through this interface. The preliminary compiler traverses the NF model's syntax tree and translates each node to the corresponding C/C++ implementation. Integrating an NF with a new platform requires the programmer to inherit the preliminary compiler and override the per-symbol callback functions. Programmers can add new logic in a callback function or even replace the original one. We call a class that inherits the compiler a “platform plugin”.
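The plugin mechanism can be sketched as a DFS visitor with per-symbol callbacks (an illustrative Python analogue of the Java interfaces; the node kinds, output strings, and the DPDK override are simplified assumptions):

```python
class Node:
    def __init__(self, kind, children=()):
        self.kind, self.children = kind, list(children)

class Compiler:
    """Preliminary compiler: DFS traversal invoking one callback per symbol kind."""
    def visit(self, node):
        # dispatch to visit<Kind>() if the subclass defined it, else the default
        getattr(self, f"visit{node.kind}", self.default)(node)
        for child in node.children:
            self.visit(child)

    def default(self, node):
        self.out.append(f"c++:{node.kind}")

    def compile(self, tree):
        self.out = []
        self.visit(tree)
        return self.out

class DPDKVisitor(Compiler):
    """Platform plugin: override only the Receive callback, inherit the rest."""
    def visitReceive(self, node):
        self.out.append("dpdk:rte_eth_rx_burst")

tree = Node("Program", [Node("Init"), Node("Receive"), Node("SMAT", [Node("Entry")])])
assert Compiler().compile(tree) == \
    ["c++:Program", "c++:Init", "c++:Receive", "c++:SMAT", "c++:Entry"]
assert DPDKVisitor().compile(tree) == \
    ["c++:Program", "c++:Init", "dpdk:rte_eth_rx_burst", "c++:SMAT", "c++:Entry"]
```

Only the overridden symbol changes in the output; every other node still goes through the inherited translation, which is the decoupling the paper relies on.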
B. Use Cases
We walk through the examples in § II to show how NFD performs the integration.
**Example of I/O acceleration with DPDK.** We first identify that the abstraction for I/O is the “Receive” symbol. Then we inherit the preliminary compiler to create a DPDK platform plugin named `DPDKVisitor`. In the overridden callback function `visitReceive()`, we do not call the superclass (i.e., the compiler) but add the DPDK implementation (`rte_eth_rx_burst()`); in all other callback functions, we simply call the superclass's method. Executing `DPDKVisitor.visit(syntax_tree)` traverses the syntax tree again to generate the new program; the I/O logic is replaced and everything else is unchanged.
**Example of GPU acceleration.** Similarly to the DPDK case, GPU acceleration overrides the operator `PatternMatch()`, replacing the CPU implementation with a GPU one.
**Example of integrating with OpenNF.** Integrating with OpenNF is a bit more complicated, but the workflow is the same. Each NF needs three modifications: (1) adding the agent code that starts the agent thread in the initialization, (2) adding a collection of all states in the NF so that they are retrievable in state operations, and (3) implementing the interfaces of the state operations (get/put/delete).
We build an OpenNF plugin for (1) and (2), and an external library for (3). Fig. 9 shows part of the plugin. The plugin overrides visitInit() to add the logic that starts the OpenNF agent and declares the allStates variable (line 6); it then overrides visitStateDeclaration(), where the name of each state variable is added to allStates. By calling OpenNFVisitor.visit(syntax_tree), the code for (1) and (2) is instrumented into the final NF code.
In the external library, when get/put/delete is called for a flow, the list of all states is iterated, and each state uses [flow] to operate on the corresponding state instance. NFD links the external library with the OpenNF-plugin-generated code and obtains an executable integrated with OpenNF.
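The external library's behavior can be sketched as iterating the collected allStates list and touching each state's instance for the given flow (illustrative Python; the get/put/delete names come from the text, while the state shape and defaults are assumptions):

```python
# Each state keeps per-flow instances; all_states is the list the plugin collected.
class SimpleState:
    def __init__(self, name, default=0):
        self.name, self.default, self.instances = name, default, {}

all_states = [SimpleState("counter"), SimpleState("seen")]

def get(flow):
    # collect each state's instance for this flow (allocating on demand)
    return {s.name: s.instances.setdefault(flow, s.default) for s in all_states}

def put(flow, values):
    for s in all_states:
        if s.name in values:
            s.instances[flow] = values[s.name]

def delete(flow):
    for s in all_states:
        s.instances.pop(flow, None)

put("flowA", {"counter": 7, "seen": 1})
assert get("flowA") == {"counter": 7, "seen": 1}
delete("flowA")
assert get("flowA") == {"counter": 0, "seen": 0}  # re-allocated with defaults
```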
Example of adding SGX protection. SGX protection needs to find all NF state variables and state-related functions, and seal them in a specially protected memory region. Fig. 10 shows the plugin for this. It overrides visitStateDeclaration(), visitStateMatch(), and visitStateAction(); in each overriding function, the plugin collects the variable names and function names. Finally, the plugin outputs the list of state variables and functions. NFD uses the list to generate an SGX configuration, compiles the code with that configuration, and outputs an SGX-enhanced executable.
VI. IMPLEMENTATION
<table>
<thead>
<tr>
<th>Component of NFD</th>
<th>Lines of Code</th>
</tr>
</thead>
<tbody>
<tr>
<td>NFD model grammar</td>
<td>234 (g4)</td>
</tr>
<tr>
<td>compiler frontend (automatically derived by Antlr)</td>
<td>4.3k (Java)</td>
</tr>
<tr>
<td>compiler backend (generate C++ NF programs)</td>
<td>1137 (Java)</td>
</tr>
<tr>
<td>C++ template (program structure, operators) for NFs</td>
<td>752 (C++)</td>
</tr>
<tr>
<td>extension for OpenNF</td>
<td>489 (C++)</td>
</tr>
<tr>
<td>extension for GPU</td>
<td>668 (C++)</td>
</tr>
<tr>
<td>extension for DPKD</td>
<td>167 (C++)</td>
</tr>
<tr>
<td>extension for SGX</td>
<td>273 (C++)</td>
</tr>
</tbody>
</table>
**NFD Implementation.** We write the syntax of the NFD language in g4 and use Antlr4 (Java) to build the NFD compiler frontend (i.e., the parser) and the platform integration interfaces (the syntax tree traversal function and callback functions). We then implemented a preliminary compiler and plugins for different platforms. The lines of code of the main components are listed in Table II.
The NFD compiler has a few tunable parameters. (1) It can be configured to generate either a packet NF or a bytestream NF. A packet NF operates on each packet using the pcap library [25], while a bytestream NF operates on each flow using sockets. For example, a packet LB modifies the destination IP and port of each packet, while a stream-level LB terminates the incoming TCP connection and relays byte streams to the next TCP connection. Once the NF type is configured, the compiler also performs a semantic check: packet operators cannot be applied to bytestreams and vice versa. (2) For environmental plugins (OpenNF, Intel SGX, GPU, DPDK), the NFD compiler has arguments that decide whether to add the plugins to NF programs.⁴
NF Development. We developed 14 NFs using NFD, spanning security-featured NFs (e.g., Firewall, heavy hitter detector, and flood detector), LBS (layer-3 and layer-4), NAT, monitors, and rate limiters. The typical NFs in Fig. 5 are used for representing results in this section. A complete list and testing results are in [18]. In addition, we also collect several commodity NFs to compare with NFD-based NFs (for logic and performance): they are Snort, PRADS, Balance, HAProxy, and Click NAT [26]–[30].
**Experiment Settings.** All NF tests run on three servers connected to one switch; each server has an Intel i9 CPU (10-core, 20-thread), 128GB memory, a 10Gbps NIC, three NVIDIA GTX1080 Ti graphics cards, and a 1TB SSD. We use the network traces in [31] to test our NFs. An NF experiment is performed in one of the following four ways: (1) Unit-1host: the network I/O is removed, a prepared trace file is injected directly into the NF’s processing logic, and the NF runs on one host; (2) NS-1host: an NF runs as a native process on one physical host and is chained to a sender and a receiver on the same host using Linux network namespaces and Open vSwitch (OVS) [32]; (3) VM-1host: the NF is wrapped in a VM (using KVM [33]), and the NF-residing VM is chained to a receiver VM and a sender VM by OVS on the same host; (4) Native-2hosts: the NF runs on one host as a native (non-virtualized) process and is chained with a sender and a receiver on another physical host.
VII. EVALUATION
We show that NFD saves development workload, that its NFs are functionally valid, that the integration of NFs with platforms works correctly, and that NFD can be used to develop complex NFs equivalent to commodity ones.
A. Saving Development Workload
**Comparing the LoC.** Theoretically, to build n NFs in m environments, the traditional development method incurs a workload of O(nm), while NFD can presumably reduce the workload to O(n + m).
We use lines of code (LoC) to quantify the development workload. As shown in Table II, building the NFD framework takes 2123 LoC (234 for the language grammar, 1137 for the compiler backend, and 752 for the NF template; the 4.3k LoC of the derived frontend parser is not counted). With this platform established, each NF model costs 20 LoC on average, and the four environments cost 1597 LoC in total (489 for OpenNF, 668 for GPU, 167 for DPDK, and 273 for SGX). Thus the total development workload is 1877 LoC.
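The stated 1877 LoC total appears to combine the per-NF models with the environment plugins (the one-time 2123 LoC framework cost counted separately):

```latex
\underbrace{14 \times 20}_{\text{NF models}} \;+\; \underbrace{489 + 668 + 167 + 273}_{\text{four environments}} \;=\; 280 + 1597 \;=\; 1877 \ \text{LoC}
```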
Among all final NF programs, the platform-independent logic is usually 750+ LoC (from the template and SMAT). The GPU platform works only for bytestream NFs (IDS, encryption), but the other three work for all. The combination
⁴At the time of this project, the Intel SGX compiler did not support the C++ STL. We replace the C++ STL classes used in NFD with self-developed code. This change does not affect any step of NFD; it only requires the SGX compiler to compile NF programs with the self-developed code. It will be resolved when Intel releases an SGX compiler supporting the C++ STL.
of 14 NFs and 4 platforms costs about 700k LoC in total ($750 \times (489+167+273)+668 \times 2$). Without NFD, this workload would fall on human programmers, a significant burden compared with the NFD approach (700k vs. 1877).
**Case study of SGX.** In another empirical study, we compare the development man-hours of a non-NFD SGX-enhanced network monitor with those of an NFD-based one (details in § VII-C). For one of the authors, without NFD, it took one week to learn SGX programming from an SGX expert and another two days to build an SGX-enhanced network monitor (72 man-hours in total, plus some consulting time with the SGX expert). Using NFD, the same graduate student spent one hour writing a network monitor SMAT model and less than one day building an SGX plugin (following the requirements of SGX), which took only 8 man-hours. Last but not least, that SGX plugin can be applied to any NF SMAT model in the future. NFD shows good potential to improve productivity.
**B. Individual NF Validation**
We show that NFD outputs logically correct NFs.
**Logical correctness.** Our basic methodology for validating an NF’s logic is to use traces to test whether the NFD-based NF has the same behavior (toward packets) as expected (either a commodity NF or pre-computed results). We compare the following pairs of NFs: (1) NFD-based firewall vs. Snort (using the first 1M packets from the trace and tuning alert rules), (2) NFD-based bytestream LB vs. Balance and HAProxy (tuning round-robin or hash mode), (3) NFD-based NAT vs. Click NAT (tuning internal and external address pools).
In the test results (in [18]), NFs with deterministic behaviors (e.g., IDSes, round-robin LBs) behave identically to the commodity NFs. For NFs with random behaviors (e.g., hash-based LBs, NATs), the behavior of each individual flow is not exactly the same between the NFD-based NF and the commodity one, but the flows’ aggregate behaviors for each pair follow the same distribution (e.g., a uniform distribution from frontends to backends in hash-based LBs, and a collision-free mapping from an internal address pool to an external one in NATs).
We then test whether the NFD-based rate limiter achieves the expected rate control for flows. We set up a VM-1host experiment for the rate limiter and tune the sending rate and the packet size. We plot the actual packet processing rate and throughput in Fig. 11. There is an upper bound on the packet processing rate, about 1.67 mpps (the 64B bar of the rightmost group). If the target rate does not exceed this bound (i.e., Control_Rate/Packet_Size < 1.67 mpps), the rate limiter controls the sending rate accurately as configured.
**Performance (Unit-1host).** We test whether the NFs’ performance is acceptable. We repeat the unit test on the firewall, tune the number and granularity of rules, and measure the throughput and packet processing rate. Fig. 12 shows the performance with 10 rules that deny traffic matching a few IPs (layer-3) or IP+port pairs (layer-4). We observe that (1) the NFD-based firewall performs significantly better than Snort (2.5 mpps vs. <0.5 mpps). Looking into the code, we find that the performance gap comes from the implementation of flow-rule matching: Snort stores the configured rules in a linked list and matches each packet against them one by one, whereas the rules in the NFD model are embedded directly into the generated code. Hence, our firewall performs better. (2) The NFD-based firewall configured with layer-3 rules outperforms the one with layer-4 rules, but Snort does not show this trend. The reason is that Snort blindly parses every packet up to layer 4, while the NFD firewall adapts the parsing depth to the configuration.
The performance of bytestream LBs is shown in Fig. 13. The experiments are under Unit-1host (using sockets for inter-process communication between the sender, the NF, and the receiver). LBs are in round-robin mode and there are five backend servers in each experiment. We tune the number of incoming flows from the frontend. We observe that (1) the NFD-based LB always has higher throughput than HAProxy, and it also outperforms Balance when there is only one flow. The reason is that Balance and HAProxy are commodity NFs with many extra features (e.g., group-based round-robin in Balance, a consumer-producer I/O model in HAProxy). Although we carefully turned unused features off to make the comparison fair, Balance and HAProxy still silently execute some unused features, wasting CPU cycles. (2) Balance outperforms the other two LBs when there is more than one flow, because Balance creates a process (fork) for each new connection and thus leverages the multiple cores of the machine. This advantage stops growing once the server side is fully loaded (i.e., >5 flows for 5 backend servers).
We run complete tests for all NFs and list their performance in [18]. The unit performance tests show that NFD-based NFs have acceptable performance; in many cases they can be viewed as micro-services without redundant features, which gives them better performance.
**Performance (Native-2hosts).** We put NFs into a synthesized environment to see whether they become the bottleneck of the system. We choose various NFs (stateful firewall, stateless firewall, NAPT, layer-3 LB) and tune the packet size (64B-1500B). Fig. 14 shows the throughput when these NFs use DPDK or libpcap. We observe that libpcap
⁵We use Snort 1.0, which only contains layer-3 and layer-4 parsing; thus we can exclude the possibility that Snort has other CPU-consuming logic.
usually achieves <1 mpps throughput, while DPDK NFs achieve 1-5 mpps. For DPDK NFs, the throughput is constrained by the total bandwidth of 10 Gbps. Thus, NFD NFs can process packets at the line rate of the NIC.
C. NF-Platform Integration
We show use cases of integrating NFs with environments and measure their performance.
**Accelerate processing with GPU.** Atop the CUDA Toolkit [34], two operators (encryption and pattern matching) are integrated with bytestream NFs. The performance results are in Fig. 15. GPU operators need more preparation time (e.g., copying data from memory to the GPU) but accelerate processing through parallel computation. In Fig. 15a, the GPU is slower at encrypting bytestreams under 6KB but faster on larger ones, because the encryption chunks a bytestream and encrypts the blocks in parallel. Similarly (Fig. 15b), the GPU matches >5K patterns faster than the CPU but is slower below 5K patterns, since the patterns are matched in parallel. In short, GPU operators perform worse than the CPU when preparation time dominates the process.
**Alternative packet I/O using DPDK.** NFD provisions NFs with different packet I/O drivers (DPDK and libpcap) and deploys them in the path between two end-hosts. Fig. 16 shows the end-to-end RTT in the VM-based test. Benefiting from kernel-bypass technology, DPDK has about 10X smaller RTT than libpcap (405µs vs. 6952µs).
**NF state management with OpenNF.** We port NFD-based NFs to an OpenNF platform. We use the NFD-based firewall to replace the NFs in the state-move experiment of the OpenNF report (§8.1.1 and Fig. 10 in [2]) and repeat the experiment. The NFD-based firewall successfully interacts with the OpenNF controller; the experiment results are in [18]. We draw the same conclusions as OpenNF [2]: (1) stricter state-migration requirements (no guarantee (NG) > loss-free (LF) > order-preserving (OP)) lengthen the state move time and packet latency; (2) the optimizations in OpenNF (parallelizing (PL) and late-locking-early-release (LLER)) improve the state move time and packet latency.
**Enhance NF security with SGX.** We use NFD to generate three pairs of NFs: flow counter (FC), packet load balancer (LB), and NAT. Each pair has one NF without SGX protection and one with SGX protecting its states. We set up Unit-1host for these NFs, tune the number of flows, and measure their performance in Fig. 17. NFD NFs achieve 1 mpps in the SGX environment, which is acceptable. Compared with the same setting without SGX, where the throughput is usually >10 mpps⁶, we conclude that the SGX environment is the bottleneck.
D. Case Study: Complex NF Development
**Replace NF chains by consolidating them.** In current NFV systems, NFs are fixed and chained to obtain complex functionality. NFD provides an alternative solution: consolidating NF models and generating one executable.
We use the example from § I, where a network client needs a load balancer with blacklisting. This can be implemented either by chaining a firewall and a load balancer (denoted “FW→LB”) or by consolidating a firewall model with a load balancer model using NFD (denoted “FW+LB”). Fig. 18 shows the CDF of the time to deliver packets from the sender to the backend server on the KVM testbed with these two approaches.
We observe that each NF (FW, LB) increases the median network latency from 5087µs (baseline, “No NF”) to 12895µs (FW) or 12637µs (LB), and chaining NFs doubles the latency (20331µs in “FW→LB”), but merging them adds little latency over a single NF (13831µs). Thus, NFD provides a more appealing alternative to NF chaining.
**Build a complex NF equivalent to a commodity NF.** pfSense [35] from Netgate has become a prevalent NF for in-network security. It embraces several core data plane features, including firewall, NAT, LB, and rate limiting⁷. We make an equivalent implementation in NFD by concatenating
---
⁶This number is large because SGX does not support the C++ STL and we replaced the map data structure with an array in all four NFs.
⁷Beyond these features, pfSense can provide off-path data plane services such as DHCP and DNS, which should be provided independently instead of being synthesized with the four on-path functionalities. The web GUI of pfSense belongs to the control plane, and we do not consider it in NFD.
the SMATs of these four NF models and compiling the merged model into a synthesized NF program.
Evaluation shows that the NFD-based NF has logic equivalent to pfSense, with a small performance degradation (Fig. 19, e.g., 1.68 vs. 2.08 Gbps at a packet size of 512B). The degradation is caused by the non-optimal (but universal) data structures and redundant parsing in NFD; this is the tradeoff between agile development and performance, and it can likely be improved with compiler optimization techniques.
VIII. DISCUSSION AND RELATED WORK
**Design choice of the NFD Language.** NFD “summarizes” the programming abstractions of existing development and modeling frameworks (plus two new abstractions of its own). The language is interchangeable with most existing languages, e.g., SNAP [17], VFP [1], P4 [36]. As long as the compiler of another framework can provide the same syntax tree modification interfaces, the NFD methodology can be applied similarly.
**Deployment progress.** NFD is currently open sourced anonymously at [18]. Its model-based NFs have recently been released on OpenNetVM [37].
**NF development frameworks.** Most recent NF development frameworks target one environment or platform and summarize NF programming abstractions such as packet parsing, filtering, and transformation [12], [13]; NFD complements them with cross-platform integration. Modular NF development [21], [38] eases the composition of NFs but does not target cross-platform deployment, because intra-module logic is still ad hoc and not designed to be platform independent.
**NF management frameworks.** Traditional NF management frameworks [3], [37], [39]–[41] view an NF as a monolithic logical unit, which does not help logic design inside NFs. Several platform-specific development/porting solutions are bound to environment-related features (e.g., SGX, GPU, OpenNF, DPDK) [2], [6], [7], [9]–[11], [34], [42], [43] and lack cross-platform abstractions. Thus, we believe NFD complements existing development frameworks and methodologies with NF logic (re-)design and cross-platform adaptation.
**NF frameworks for other purposes.** A few recent NFV frameworks address various requirements [44], [45], and NFD can be a great help in porting NFs to them. SNF [38] proposes that DAG-based NF chains be synthesized to eliminate cross-NF redundancy, and NFD NF models can contribute to the synthesis. Like DPDK, other I/O acceleration solutions (e.g., mTCP [46]) can also override NFD I/O. CHC requires NF states to be identified for integration [47]; Metron [41] requires anatomizing NFs and offloading the stateless parts to hardware; VFP and Eden [1], [22] also implement NFs on both software and hardware. For these frameworks, NFD can help by automating NF program analysis and porting via its compiler plugins.
**Other inspirations.** NFD is inspired by several works. (1) Packet processing operators (e.g., “resubmit”) are also used in OVS and P4 [32], [36]. (2) DevoFlow [48] proposes “rule clone” to control switch rule explosion, and NFD adopts this idea in its state abstraction. (3) The model language is inspired by several NF modeling works (such as DFA [49], the stateful table [24], and the NF modeling language [17]).
IX. CONCLUSION
We built a cross-platform NF development framework named NFD. It has a platform-independent language for developing NF models, and its compiler provides interfaces to operate on the model and integrate platform-specific features. We show how to develop 14 NFs with 6 platforms. Our evaluation demonstrates NFD’s feasibility: NFs are developed with less workload, valid logic and performance, platform compatibility, and commodity-equivalent complex logic.
ACKNOWLEDGMENT
This project is supported by National Natural Science Foundation of China Grant No. 61802225 and a Collaborative Research Award from Microsoft Research Asia.
REFERENCES
Lecture 1: Algebraic Effects I
Gordon Plotkin
Laboratory for the Foundations of Computer Science, School of Informatics, University of Edinburgh
NII Shonan Meeting No. 146
Programming and Reasoning with Algebraic Effects and Effect Handlers
Course
- Lecture 1: Algebraic effects I
- Lecture 2: Type and effect systems
- Lecture 3: Algebraic effects II
- Lecture 4: Effect handlers
Outline
1. Moggi’s Monads As Notions of Computation
2. Algebraic Effects
- Introduction
- Equational theories
- Finitary equational theories
- Algebraic operations and generic effects
3. Prospectus and Exercises
The typed $\lambda$-calculus: syntax
### Raw Syntax
#### Types
$\sigma ::= b \mid \sigma \to \tau$
#### Terms
$M ::= c \mid x \mid \lambda x : \sigma. M \mid MN$
### Typing
#### Environments
$\Gamma ::= x_1 : \sigma_1, \ldots, x_n : \sigma_n$
#### Judgments
$\Gamma \vdash M : \sigma$
The typed $\lambda$-calculus: typing rules
Variables
\[ x_1 : \sigma_1, \ldots, x_n : \sigma_n \vdash x_i : \sigma_i \quad (1 \leq i \leq n) \]
Constants
\[ \Gamma \vdash c : \sigma \quad \text{(as given)} \]
Abstractions
\[ \Gamma, x : \sigma \vdash M : \tau \]
\[ \Gamma \vdash \lambda x : \sigma. M : \sigma \to \tau \]
Applications
\[ \Gamma \vdash M : \sigma \to \tau \quad \Gamma \vdash N : \sigma \]
\[ \Gamma \vdash MN : \tau \]
The typed $\lambda$-calculus: semantics in $\mathbf{Set}$
Types
$$[[\sigma]] \in \mathbf{Set}$$
Basic Types
$$[[b]] = \text{(as given)}$$
Function spaces
$$[[\sigma \rightarrow \tau]] = [[\sigma]] \Rightarrow [[\tau]]$$
Environments
$$[[x_1 : \sigma_1, \ldots, x_n : \sigma_n]] = [[\sigma_1]] \times \ldots \times [[\sigma_n]]$$
Semantics: very explicit
Terms
\[ \Gamma \vdash M : \sigma \]
\[ \llbracket M \rrbracket : \llbracket \Gamma \rrbracket \rightarrow \llbracket \sigma \rrbracket \]
Variables
\[ \llbracket x_i \rrbracket(a_1, \ldots, a_n) = a_i \]
Constants
\[ \llbracket c \rrbracket(a_1, \ldots, a_n) = (\text{as given}) \]
Abstractions
\[ \llbracket \lambda x : \sigma. M \rrbracket(a_1, \ldots, a_n) = a \in \llbracket \sigma \rrbracket \mapsto \llbracket M \rrbracket(a_1, \ldots, a_n, a) \]
Applications
\[ \llbracket MN \rrbracket(a_1, \ldots, a_n) = \llbracket M \rrbracket(a_1, \ldots, a_n)(\llbracket N \rrbracket(a_1, \ldots, a_n)) \]
Suppose programs can raise exceptions $e \in E$. Then we want:
$$\Gamma \vdash M : \sigma$$
$$\llbracket M \rrbracket : \llbracket \Gamma \rrbracket \to (\llbracket \sigma \rrbracket + E)$$
(Remember: $X + Y = (\{0\} \times X) \cup (\{1\} \times Y)$)
Function spaces
$$\llbracket \sigma \to \tau \rrbracket = \llbracket \sigma \rrbracket \Rightarrow (\llbracket \tau \rrbracket + E)$$
Variables
$$\llbracket x_i \rrbracket(a_1, \ldots, a_n) = \text{inl}(a_i) = \eta(a_i)$$
Constants
$$\llbracket c \rrbracket(a_1, \ldots, a_n) = \text{inl}(\text{as given})$$
Semantics of Exceptions (cntnd.)
Abstractions
\[
\llbracket \lambda x : \sigma. M \rrbracket (a_1, \ldots, a_n) = \eta(a \in \llbracket \sigma \rrbracket \mapsto \llbracket M \rrbracket (a_1, \ldots, a_n, a))
\]
Applications For \( M : \sigma \rightarrow \tau \), \( N : \sigma \)
\[
\llbracket MN \rrbracket (\gamma) = E\text{-ap}(\llbracket M \rrbracket (\gamma), \llbracket N \rrbracket (\gamma))
\]
where
\[
E\text{-ap} : (\llbracket \sigma \rightarrow \tau \rrbracket + E) \times (\llbracket \sigma \rrbracket + E) \rightarrow (\llbracket \tau \rrbracket + E)
\]
\[
E\text{-ap}(a, b) = \begin{cases}
\text{inr}(e) & \text{(if } a = \text{inr}(e)) \\
\text{inr}(e') & \text{(if } a = \text{inl}(f) \text{ and } b = \text{inr}(e')) \\
f(c) & \text{(if } a = \text{inl}(f) \text{ and } b = \text{inl}(c))
\end{cases}
\]
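A minimal Haskell transcription of $E\text{-ap}$ (the name `eAp` is ours), using `Either e` for $X + E$, with `Left` playing $\text{inr}$ (an exception) and `Right` playing $\text{inl}$ (a value):

```haskell
-- E-ap: apply a possibly-exceptional function to a possibly-exceptional
-- argument. Left e is inr(e); Right x is inl(x).
eAp :: Either e (x -> Either e y) -> Either e x -> Either e y
eAp (Left e)  _         = Left e   -- the function part raised e
eAp (Right _) (Left e') = Left e'  -- the argument part raised e'
eAp (Right f) (Right c) = f c      -- both succeeded: apply
```

Note the left-to-right bias of the second clause: an exception in the function position shadows one in the argument position, matching the order of the cases on the slide.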
Moggi’s insight: a categorical view of $- + E$
**Functorial action**
$$f : X \rightarrow Y$$
$$f + E : X + E \rightarrow Y + E$$
where:
$$(f + E)(a) = \begin{cases}
\text{inl}(f(b)) & \text{if } a = \text{inl}(b) \\
\text{inr}(e) & \text{if } a = \text{inr}(e)
\end{cases}$$
**Monadic structure**
**Unit**
$$\eta : X \rightarrow X + E$$
**Multiplication**
$$\mu : (X + E) + E \rightarrow X + E$$
$$\mu(a) = \begin{cases}
\text{inr}(e) & \text{if } a = \text{inr}(e) \\
\text{inr}(e') & \text{if } a = \text{inl}(\text{inr}(e')) \\
\text{inl}(b) & \text{if } a = \text{inl}(\text{inl}(b))
\end{cases}$$
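The functorial action, unit and multiplication of $- + E$ can be written out directly on `Either e` (names `fmapE`, `etaE`, `muE` are ours; again `Left` is $\text{inr}$ and `Right` is $\text{inl}$):

```haskell
-- The monad structure of (- + E) on Either e.
fmapE :: (x -> y) -> Either e x -> Either e y   -- functorial action f + E
fmapE f (Right b) = Right (f b)
fmapE _ (Left e)  = Left e

etaE :: x -> Either e x                         -- unit
etaE = Right

muE :: Either e (Either e x) -> Either e x      -- multiplication
muE (Left e)          = Left e                  -- inr(e)
muE (Right (Left e')) = Left e'                 -- inl(inr(e'))
muE (Right (Right b)) = Right b                 -- inl(inl(b))
```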
A categorical view of $- + E$ (cntnd.)
Strength
Left strength
$$\text{lst} : X \times (Y + E) \rightarrow (X \times Y) + E$$
Right strength
$$\text{rst} : (X + E) \times Y \rightarrow (X \times Y) + E$$
$$\text{lst}(\langle a, b \rangle) = \begin{cases}
\text{inr}(e) & \text{if } b = \text{inr}(e) \\
\text{inl}(\langle a, c \rangle) & \text{if } b = \text{inl}(c)
\end{cases}$$
Putting these together
\[ ((X \Rightarrow (Y + E)) + E) \times (X + E) \]
\[ \xrightarrow{\text{rst}} ((X \Rightarrow (Y + E)) \times (X + E)) + E \]
\[ \xrightarrow{\text{lst}+E} (((X \Rightarrow (Y + E)) \times X) + E) + E \]
\[ \xrightarrow{\mu} ((X \Rightarrow (Y + E)) \times X) + E \]
\[ \xrightarrow{\text{ap+E}} (Y + E) + E \]
\[ \xrightarrow{\mu} Y + E \]
Summarising, one needs an operator $T(X)$ on $\textbf{Set}$, the category of sets, equipped with:
- A functorial action $(X \Rightarrow Y) \xrightarrow{T(\cdot)} (T(X) \Rightarrow T(Y))$
This makes $T$ a functor
- A unit $X \xrightarrow{\eta_X} T(X)$
- A multiplication $T(T(X)) \xrightarrow{\mu_X} T(X)$
These make $T$ a monad
- A (left) strength $X \times T(Y) \xrightarrow{\text{st}_{X,Y}} T(X \times Y)$
This makes $T$ a strong monad
Note, can derive the right strength:
$T(X) \times Y \xrightarrow{\text{twist}} Y \times T(X) \xrightarrow{\text{st}_{Y,X}} T(Y \times X) \xrightarrow{T(\text{twist})} T(X \times Y)$
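In Haskell, the left strength exists for any functor, and the right strength is derived exactly as on the slide (names `lst`, `rst`, `twist` are ours):

```haskell
-- Left strength for any functor T, and the right strength derived from
-- it via twist: rst = T(twist) . lst . twist.
lst :: Functor t => (x, t y) -> t (x, y)
lst (x, ty) = fmap (\y -> (x, y)) ty

twist :: (x, y) -> (y, x)
twist (x, y) = (y, x)

rst :: Functor t => (t x, y) -> t (x, y)
rst = fmap twist . lst . twist
```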
Semantics of application for any monad
\[ T(X \Rightarrow T(Y)) \times T(X) \xrightarrow{\text{rst}} T((X \Rightarrow T(Y)) \times T(X)) \]
\[ \xrightarrow{T(\text{lst})} T(T((X \Rightarrow T(Y)) \times X)) \]
\[ \xrightarrow{\mu} T((X \Rightarrow T(Y)) \times X) \]
\[ \xrightarrow{T(\text{ap})} T(T(Y)) \]
\[ \xrightarrow{\mu} T(Y) \]
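This composite is exactly what monadic bind packages in Haskell; a one-line sketch (the name `appSem` is ours):

```haskell
-- The application diagram for an arbitrary monad T: >>= combines the
-- strengths, T(ap) and the two uses of mu.
appSem :: Monad t => t (x -> t y) -> t x -> t y
appSem tf tx = tf >>= \f -> tx >>= \x -> f x
```

Instantiating `t` at `Either e` recovers `E-ap` from the exception slides; instantiating it at `[]` gives the nondeterministic application discussed later.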
Moggi’s insight (cntnd.)
Other effects can also be modelled by strong monads $T$, e.g.:
- **State**: functions $S \times X \xrightarrow{f} S \times Y$ can be rewritten as $X \xrightarrow{g} (S \times Y)^S$, and $T_{\text{state}}(X) = (S \times X)^S$ is a strong monad.
- **Finite Nondeterminism**: $T_{\text{SL}}(X) = \mathcal{F}^+(X)$ the collection of non-empty finite subsets of $X$.
- **Continuations**: Functions $R^Y \xrightarrow{f} R^X$ can be rewritten as $X \xrightarrow{g} T_{\text{cont}}(Y)$, where $T_{\text{cont}}(X) = (X \Rightarrow R) \Rightarrow R$.
- **Selection**: $T_{\text{sel}}(X) = (X \Rightarrow R) \Rightarrow X$ (Escardó and Oliva).
and there are many other examples. They include combinations, such as this for state plus exceptions:
$$T(X) = (S \times (X + E))^S$$
In Cpo one has similar examples. They include lifting to model recursion, e.g., for state $+$ nontermination: $T(P) = \left( (S \times P)_{\perp} \right)^S$
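For instance, $T_{\text{state}}(X) = (S \times X)^S$ can be written out with its unit and Kleisli extension made explicit (names `St`, `etaSt`, `bindSt`, `getSt`, `putSt` are ours):

```haskell
-- T_state(X) = (S x X)^S, with unit, bind, and the basic operations.
newtype St s x = St { runSt :: s -> (s, x) }

etaSt :: x -> St s x
etaSt x = St (\s -> (s, x))

bindSt :: St s x -> (x -> St s y) -> St s y
bindSt m k = St (\s -> let (s', x) = runSt m s in runSt (k x) s')

getSt :: St s s                -- read the current state
getSt = St (\s -> (s, s))

putSt :: s -> St s ()          -- overwrite the state
putSt s = St (\_ -> (s, ()))
```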
Outline
1. Moggi’s Monads As Notions of Computation
2. Algebraic Effects
- Introduction
- Equational theories
- Finitary equational theories
- Algebraic operations and generic effects
3. Prospectus and Exercises
Two questions
- How do effects arise, i.e., how do we “construct” them in a programming language?
- Answering that leads to understanding where (most of) Moggi’s monads come from.
An example: finite nondeterminism
Take $T_{SL}(X) = \mathcal{F}^+(X)$ the collection of non-empty finite subsets of $X$.
To create the effects we add an **effect constructor** to the language:
$$
\frac{M : \sigma \quad N : \sigma}{M + N : \sigma}
$$
with semantics
$$
\llbracket M + N \rrbracket(\gamma) = \llbracket M \rrbracket(\gamma) \cup \llbracket N \rrbracket(\gamma)
$$
So $T_{SL}(X)$ is an **algebra** when equipped with the binary operation $\cup : T_{SL}(X) \times T_{SL}(X) \to T_{SL}(X)$. But which algebra is it?
There is a natural equational theory, with signature $+: 2$, and set of axioms $\text{SL}$ (for semilattices) given by:
- **Associativity** $$(x + y) + z = x + (y + z)$$
- **Commutativity** $$x + y = y + x$$
- **Absorption** $$x + x = x$$
The above algebra on $\mathcal{F}^+(X)$ satisfies these equations, interpreting $+$ as $\cup$.
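As a quick sanity check, one can model $\mathcal{F}^+(X)$ in Haskell as sorted duplicate-free lists, so that set equality is plain list equality, and observe the three axioms on concrete inputs (the representation and the name `union'` are ours):

```haskell
import Data.List (nub, sort)

-- F+(X): non-empty finite subsets of X, represented as sorted
-- duplicate-free lists so that set equality is list equality.
type FPlus a = [a]

-- Interpretation of + as union, normalizing the representation.
union' :: Ord a => FPlus a -> FPlus a -> FPlus a
union' s t = nub (sort (s ++ t))
```

On this representation `union'` is associative, commutative, and absorptive, exactly as the SL axioms require.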
Further:
$$\mathcal{F}^+ \text{ is the free algebra monad}.$$
Freeness
For any semilattice \( S \), and any function \( f : X \to S \) there is a unique homomorphism \( f^\dagger : \mathcal{F}^+(X) \to S \) such that the following diagram commutes:
\[
\begin{array}{ccc}
X & \xrightarrow{\ \eta\ } & \mathcal{F}^+(X) \\
 & \searrow_{f} & \big\downarrow{\scriptstyle f^\dagger} \\
 & & S
\end{array}
\qquad \text{i.e. } f^\dagger \circ \eta = f
\]
where
1. \( \eta(x) = \{x\} \)
2. \( f^\dagger(\{x_1, \ldots, x_n\}) = f(x_1) + \ldots + f(x_n) \)
Then: \( \mathcal{F}^+ \) is a strong monad with unit \( \eta \) and multiplication \( (\text{id}_{\mathcal{F}^+(X)})^\dagger \).
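Keeping the list representation of non-empty finite sets from before, $\eta$ and $f^\dagger$ can be sketched in Haskell for any semilattice given by its join (names `etaF`, `fdagger` are ours):

```haskell
-- eta x = {x}; f†{x1,...,xn} = f x1 + ... + f xn for a semilattice
-- with join `plus`. Non-empty finite sets are non-empty lists.
etaF :: a -> [a]
etaF x = [x]

fdagger :: (s -> s -> s) -> (a -> s) -> [a] -> s
fdagger plus f = foldr1 plus . map f
```

With `plus = max` (a semilattice join on `Int`), the freeness equation `fdagger plus f . etaF == f` holds by construction.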
Is this the right set of axioms for nondeterminism?
- An equational theory is *equationally inconsistent* if it proves \( x = y \).
- An equational theory is *Hilbert-Post complete* if adding an unprovable equation makes it equationally inconsistent.
**Theorem**
*SL* is Hilbert-Post complete.
**Proof.**
Let \( t = u \) be an unprovable equation, and assume it. Then there is a variable \( x \) in one of \( t \) or \( u \), but not in the other. Equating all the other variables to \( y \) one obtains one of the following two equations: \( x = y \) or \( x + y = y \). One obtains \( x = y \) from either of these.
Other effects
- Similar results hold in **Set** for, eg, exceptions, (global) state; I/O; (probabilistic) nondeterminism; and combinations thereof (but may not get HP completeness).
- May need infinitary algebra and parameterised operations.
Further
- Works similarly for **Cpo** but also need inequations $t \leq u$.
- For **locality**, e.g., new variables and fresh names, one uses categories of presheaves.
Outline
1. Moggi’s Monads As Notions of Computation
2. Algebraic Effects
- Introduction
- Equational theories
- Finitary equational theories
- Algebraic operations and generic effects
3. Prospectus and Exercises
Finitary equational theories: syntax
- **Signature** $\Sigma_e = (\text{Op}, \text{ar} : \text{Op} \to \mathbb{N})$. We write $\text{op} : n$ for arities.
- **Terms** $t ::= x \mid \text{op}(t_1, \ldots, t_n) \ (\text{op} : n)$. We leave open what the set $\text{Var}$ of variables is; this will prove useful.
- **Equations** $t = u$
- **Axiomatisations** Sets $\text{Ax}$ of equations
- **Deduction** $\text{Ax} \vdash t = u$
- **Theories** Sets of equations $\text{Th}$ closed under equational deduction
Finitary equational theories: equational deduction rules
**Axiom**
\[ \text{Ax} \vdash t = u \quad \text{(if } t = u \in \text{Ax)} \]
**Equivalence**
\[ \text{Ax} \vdash t = t \quad \frac{\text{Ax} \vdash t = u}{\text{Ax} \vdash u = t} \quad \frac{\text{Ax} \vdash t = u \quad \text{Ax} \vdash u = v}{\text{Ax} \vdash t = v} \]
**Congruence**
\[ \frac{\text{Ax} \vdash t_i = u_i \quad (i = 1, \ldots, n)}{\text{Ax} \vdash \text{op}(t_1, \ldots, t_n) = \text{op}(u_1, \ldots, u_n)} \quad (\text{op} : n) \]
**Substitution**
\[ \frac{\text{Ax} \vdash t = u}{\text{Ax} \vdash t[v/x] = u[v/x]} \]
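The substitution rule relies on the operation $t[v/x]$ on terms. As a concrete sketch (the nested-tuple representation here is an illustrative choice, not fixed by the slides):

```python
# Sketch: first-order terms as either a variable (a string) or a tuple
# (op, t1, ..., tn); subst computes t[v/x]. No binders, so substitution
# is a plain structural recursion.
def subst(t, v, x):
    """Substitute term v for variable x throughout term t."""
    if isinstance(t, str):          # variable case
        return v if t == x else t
    op, *args = t                   # operation case: op(t1, ..., tn)
    return (op, *(subst(a, v, x) for a in args))
```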
Addition to $\lambda$-calculus syntax
$$
\frac{
\Gamma \vdash M_1 : \sigma, \ldots, \Gamma \vdash M_n : \sigma
}{
\Gamma \vdash \text{op}(M_1, \ldots, M_n) : \sigma
}
\quad (\text{op} : n)
$$
Algebras $A = (A, \text{op}_A : A^n \rightarrow A \ (\text{op} : n))$
Homomorphisms $h : A \rightarrow B$ are functions $h : A \rightarrow B$ such that, for all $\text{op} : n$, and $a_1, \ldots, a_n \in A$:
$$h(\text{op}_A(a_1, \ldots, a_n)) = \text{op}_B(h(a_1), \ldots, h(a_n))$$
Denotation $A[t](\rho)$, where $\rho : \text{Var} \rightarrow A$.
$A[x](\rho) = \rho(x)$ \hspace{1cm} $A[\text{op}(t_1, \ldots, t_n)](\rho) = \text{op}_A(A[t_1](\rho), \ldots, A[t_n](\rho))$
Validity $A \models t = u$
Models $A$ is a model of $Ax$ if $A \models t = u$, for all $t = u$ in $Ax$.
The free algebra monad $T_{Ax}$ of an axiomatic theory $Ax$
The free model $F_{Ax}(X)$ of $Ax$ over a set $X$ is the algebra with carrier:
$$T_{Ax}(X) \overset{\text{def}}{=} \{ [t]_{Ax} \mid t \text{ is a term with variables in } X \}$$
where $[t]_{Ax} \overset{\text{def}}{=} \{ u \mid Ax \vdash t = u \}$.
Its operations are given by:
$$\text{op}_{F_{Ax}(X)}([t_1], \ldots, [t_n]) = [\text{op}(t_1, \ldots, t_n)] \quad (\text{op} : n)$$
Freeness
For any model $\mathcal{A}$ of $\mathcal{A}x$, and any function $f : X \rightarrow A$, there is a unique homomorphism $f^\dagger : F_{\mathcal{A}x}(X) \rightarrow \mathcal{A}$ such that the following diagram commutes:
$$
X \xrightarrow{\ \eta\ } T_{\mathcal{A}x}(X) \xrightarrow{\ f^\dagger\ } A, \qquad f^\dagger \circ \eta = f
$$
where:
1. $\eta \overset{\text{def}}{=} x \mapsto [x]_{\mathcal{A}x}$
2. $f^\dagger([t]) = \mathcal{A}[t](f)$
Whence $T_{\mathcal{A}x}(X)$ is a strong monad with unit $\eta$ and multiplication $(\text{id}_{T_{\mathcal{A}x}(X)})^\dagger$. The above only characterises $F_{\mathcal{A}x}$ up to (algebraic) isomorphism (e.g., consider SL).
Another example: exceptions
Given a (possibly infinite) set $E$ of exceptions, the signature has nullary operation symbols:
$$\text{raise}_e \quad (e \in E)$$
The set of axioms $\text{Exc}$ is empty, and one obtains the usual exceptions monad
$$T_{\text{Exc}}(X) = X + E$$
There is then a puzzle: how do exception handlers fit into the algebraic theory of effects - more later!
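The exceptions monad $T_{\text{Exc}}(X) = X + E$ can be sketched directly (an illustrative encoding: values tagged `'val'`, exceptions tagged `'exn'`):

```python
# Sketch: the exceptions monad T(X) = X + E. A computation is either
# ('val', x) for a returned value or ('exn', e) for a raised exception.
def eta(x):
    """Unit: inject a value."""
    return ('val', x)

def bind(m, f):
    """Kleisli extension: apply f to a value, propagate an exception."""
    tag, payload = m
    return f(payload) if tag == 'val' else m

def raise_(e):
    """The nullary operation raise_e, its own generic effect."""
    return ('exn', e)
```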
Yet another example: one boolean location
**Signature:** write\textsubscript{true}, write\textsubscript{false} : 1; read : 2.
- Read write\textsubscript{b}(x) as “write b, and continue with x”
- Read read(x, y) as “read the boolean variable, continuing with x, if true, and with y otherwise”.
**Axioms** The set of axioms BoolState
\[
\begin{align*}
\text{read}(x, x) &= x \\
\text{read}(\text{read}(w, x), \text{read}(y, z)) &= \text{read}(w, z) \\
\text{write}_b(\text{write}_{b'}(x)) &= \text{write}_{b'}(x) \\
\text{read}(\text{write}_{\text{true}}(x), \text{write}_{\text{false}}(y)) &= \text{read}(x, y) \\
\text{write}_{\text{true}}(\text{read}(x, y)) &= \text{write}_{\text{true}}(x) \\
\text{write}_{\text{false}}(\text{read}(x, y)) &= \text{write}_{\text{false}}(y)
\end{align*}
\]
One boolean location (cntnd)
The monad is:
\[ T_{\text{bool}}(X) = (T \times X)^T \quad \text{where } T = \{\text{true}, \text{false}\} \]
The unit is \( \eta(x)(b) = (b, x) \)
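This monad, and the reading of write and read as operations on computations, can be sketched as follows (an illustrative sketch: a computation is a Python function from the current boolean state to a (new state, result) pair):

```python
# Sketch: the one-boolean-location monad T(X) = (T x X)^T, with
# computations as functions  bool -> (bool, result).
def eta(x):
    """Unit: eta(x)(b) = (b, x) -- return x, leave the state alone."""
    return lambda b: (b, x)

def write(b, k):
    """write_b(k): set the location to b, then continue as k."""
    return lambda _: k(b)

def read(k_true, k_false):
    """read(k_true, k_false): branch on the current contents."""
    return lambda b: k_true(b) if b else k_false(b)
```

The BoolState axioms can then be checked pointwise; for instance $\text{write}_b(\text{write}_{b'}(x)) = \text{write}_{b'}(x)$ and $\text{read}(x, x) = x$ hold because both sides compute the same state-transformer.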
Yet another example: probabilistic computation
**Signature:** binary operations $+_p$ for $p \in [0, 1]$.
Read $+_p$ as ‘do $x$ with probability $p$ and $y$ with probability $1 - p$’
**Axioms:** the barycentric algebra axioms:
- **One** $x +_1 y = x$
- **ID** $x +_r x = x$
- **SC** $x +_r y = y +_{1-r} x$
- **SA** $(x +_p y) +_r z = x +_{pr} \left(y +_{\frac{r(1-p)}{1-pr}} z\right)$ $\quad (r < 1, p < 1)$
A consequence:
**COM** $(x +_r u) +_s (v +_r y) = (x +_s v) +_r (u +_s y)$
Yet another example: probabilistic computation
The monad is the set of finite probability distributions over $X$
$$T_{\text{prob}}(X) = D_\omega(X) \overset{\text{def}}{=} \left\{ \sum_{i=1}^{n} \lambda_i \delta_{x_i} \mid n \geq 0, \lambda_i \geq 0, \sum_i \lambda_i = 1 \right\}$$
and the unit is $\eta(x) = \delta_x$ the Dirac probability distribution on $x$.
Prob is not HP complete, but its only proper equational extension is (equivalent to) SL, the theory of semilattices (this is a non-trivial result). These have an associative, commutative, idempotent binary operator.
Prob can be alternatively axiomatised using $n$-ary operations $\sum_{n,p_1,\ldots,p_n}$ for all $n \geq 0$ and $n$-tuples of nonnegative reals $p_1, \ldots, p_n$ summing to 1. The axioms are:
1. $\sum_{i=1,m} \delta_j^i x_i = x_j$
2. $\sum_{i=1,m} p_i \sum_{j=1,n} q_{ij} x_j = \sum_{j=1,n} (\sum_{i=1,m} p_i q_{ij}) x_j$
where Kronecker’s $\delta_j^i$ is 1 if $i = j$, and 0 otherwise.
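Interpreting $+_p$ as a convex combination of finite distributions can be sketched as follows (an illustrative encoding: distributions as dicts from outcomes to weights; the test weights are exactly representable in binary floating point so equality checks are exact):

```python
# Sketch: finite probability distributions as weight dicts, with
# x +_p y interpreted as the mixture p*x + (1-p)*y.
def delta(x):
    """Dirac distribution on x."""
    return {x: 1.0}

def choice(p, d1, d2):
    """d1 +_p d2: mix d1 with weight p and d2 with weight 1 - p."""
    out = {}
    for x, w in d1.items():
        out[x] = out.get(x, 0.0) + p * w
    for x, w in d2.items():
        out[x] = out.get(x, 0.0) + (1 - p) * w
    return out
```

The barycentric axioms ID and SC can then be observed numerically, e.g. $x +_p x = x$ and $x +_p y = y +_{1-p} x$.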
Algebraic semantics
Form of semantics
\[
\Gamma \vdash M : \sigma \\
\Downarrow \\
[\Gamma] \xrightarrow{\ [M]\ } T_{Ax}([\sigma])
\]
Abstraction: Typing
\[
\frac{\Gamma, x : \sigma \vdash M : \tau}{\Gamma \vdash \lambda x : \sigma. M : \sigma \rightarrow \tau}
\]
Abstraction: Semantics
\[
[\lambda x : \sigma.\, M](\gamma) = [\, a \in [\sigma] \mapsto [M](\gamma, a) \,]_{Ax}
\]
Algebraic semantics (cntnd.)
Application: Typing
\[
\frac{\Gamma \vdash M : \sigma \to \tau \quad \Gamma \vdash N : \sigma}{\Gamma \vdash MN : \tau}
\]
Application: Semantics
Suppose
\[
[M] (\gamma) = [t(f_1, \ldots, f_m)]_{Ax} \\
[N] (\gamma) = [u(a_1, \ldots, a_n)]_{Ax} \\
f_i(a_j) = [v_{ij}]_{Ax}
\]
Then
\[
[MN] (\gamma) = [t(u(v_{11}, \ldots, v_{1n}), \ldots, u(v_{m1}, \ldots, v_{mn}))]_{Ax}
\]
Fix a finitary equational axiomatic theory $Ax$. Then for any set $X$ and operation symbol $\text{op} : n$ we have the function:
$$T_{Ax}(X)^n \xrightarrow{\text{op}_{F_{Ax}(X)}} T_{Ax}(X)$$
Further, for any function $f : X \rightarrow T_{Ax}(Y)$, $f^\dagger$ is a homomorphism; that is, the following square commutes:
$$
\begin{array}{ccc}
T_{Ax}(X)^n & \xrightarrow{\text{op}_{F_{Ax}(X)}} & T_{Ax}(X) \\
(f^\dagger)^n \downarrow & & \downarrow f^\dagger \\
T_{Ax}(Y)^n & \xrightarrow{\text{op}_{F_{Ax}(Y)}} & T_{Ax}(Y)
\end{array}
$$
We call such a polymorphic family of functions $T_{Ax}(X)^n \xrightarrow{\varphi_{X}} T_{Ax}(X)$ algebraic.
Evaluation contexts are given by:
\[ \mathcal{E} ::= [\cdot] \mid \mathcal{E}N \mid (\lambda x : \sigma. M)\mathcal{E} \]
For any operation symbol \( \text{op} : n \) we have:
\[ \models \mathcal{E}[\text{op}(M_1, \ldots, M_n)] = \text{op}(\mathcal{E}[M_1], \ldots, \mathcal{E}[M_n]) \]
For example, with \( \mathcal{E} \) alternatively of the forms \([\cdot]N\) or \((\lambda x : \sigma. N)[\cdot]\)
\[ \models (\text{raise}_e())(N) = \text{raise}_e() \]
\[ \models (\lambda x : \sigma. M)\text{raise}_e() = \text{raise}_e() \]
\[ \models (M +_p M')N = (MN) +_p (M'N) \]
\[ \models (\lambda x : \sigma. M)(N +_p N') = (\lambda x : \sigma. M)N +_p (\lambda x : \sigma. M)N' \]
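The exception laws above can be observed concretely with the $X + E$ encoding (an illustrative sketch, reusing the tagged-pair representation of exceptions): call-by-value application evaluates the function part, then the argument, then applies, so an exception raised in either context position propagates.

```python
# Sketch: E[raise_e] = raise_e for the application contexts [.]N and
# (lambda x. M)[.], with computations tagged ('val', x) or ('exn', e).
def bind(m, f):
    """Kleisli extension: apply f to a value, propagate an exception."""
    tag, payload = m
    return f(payload) if tag == 'val' else m

def apply_cbv(mf, mn):
    """Call-by-value application: evaluate function, then argument, then apply."""
    return bind(mf, lambda f: bind(mn, lambda n: f(n)))
```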
Generic effects
Given an algebraic family $T_{Ax} (X)^n \xrightarrow{\varphi_X} T_{Ax} (X)$, regarding $n$ as $\{0, \ldots, n-1\}$, and setting $X = n$, we obtain the generic effect:
$$e \overset{\text{def}}{=} \varphi_n(\eta_n) \in T_{Ax}(n)$$
Given $e \in T_{Ax} (n)$ we obtain such an algebraic family by setting:
$$\varphi_X \overset{\text{def}}{=} \left( T_{Ax}(X)^n \xrightarrow{\ g \,\mapsto\, g^\dagger\ } T_{Ax}(X)^{T_{Ax}(n)} \xrightarrow{\ f \,\mapsto\, f(e)\ } T_{Ax}(X) \right)$$
This correspondence is a bijection between algebraic families and generic effects.
Noting that $T_{Ax} (n)$ is the collection of (equivalence classes) of terms with $n$ free variables, we see (following the above definition) that the algebraic families are exactly the definable ones.
Examples
- **Nondeterminism** Corresponding to \(+\) we have:
\[ \text{arb} = \{0, 1\} \in T_{SL}(\{0, 1\}) \]
which can be thought of as the (equivalence class of) the term \(0 + 1\).
- **Probabilistic nondeterminism** Corresponding to \(+_p\) we have:
\[ \text{coin}_p = p\delta_0 + (1 - p)\delta_1 \in T_{\text{prob}}(\{0, 1\}) \]
which can be thought of as the (equivalence class of) the term \(0 +_p 1\).
- **Exceptions** Roughly, \(\text{raise}_e : 0\) is its own generic effect. Precisely, to the family \((\text{raise}_e)_X : 1 = (X + E)^0 \rightarrow X + E\) corresponds \(\text{inr}(e) \in \emptyset + E\), which we can identify with \(e\).
Outline
1. **Moggi’s Monads As Notions of Computation**
2. **Algebraic Effects**
- Introduction
- Equational theories
- Finitary equational theories
- Algebraic operations and generic effects
3. **Prospectus and Exercises**
Some things that have been done so far
- Calculi with effects, such as $\lambda_c$ and CBPV. (Moggi; Levy; Egger, Møgelberg & Simpson)
- (Moderately) general operational semantics. May not get the expected operational semantics, e.g., for state. (P. & Power; Kammar et al)
- Work on general notions of observation and full abstraction. (Johann, Simpson & Voigtländer)
- Theory, and application, of effect deconstructors, such as exception handlers via not necessarily free algebras. (P. & Pretnar; Bauer & Pretnar; Kammar, Oury & Lindley)
- Combining monads in terms of combining theories, primarily sum and tensor. (Hyland, P. & Power)
- Work on combining algebraic effects with continuations, which are not algebraic and require special treatment. (Hyland, Levy, Power & P.)
- First thoughts on a general logic of effects; connects with modal logic. Does not give Hoare logic. (P. & Pretnar)
- Type and effect systems. (Kammar & P.; Katsumata; Bauer & Pretnar)
- Work on locality and effects (Staton; Melliès; P. & Power; Power).
- Algebraic accounts of control (Fiore & Staton)
Exercise 1: An extension of the typed $\lambda$-calculus
Consider the following extension of the $\lambda$-calculus.
Raw Syntax
Types
$\sigma ::= \text{bool} \mid \text{unit} \mid \sigma \times \tau \mid \sigma \to \tau$
Terms
$M ::= \text{true} \mid \text{false} \mid \text{if } A \text{ then } M \text{ else } N \mid x \mid \ast \mid (M, N) \mid \text{fst}(M) \mid \text{snd}(M) \mid \lambda x : \sigma. M \mid MN \mid \text{op}(M_1, \ldots, M_n)$
Informal understanding of the language.
- $\text{bool}$ has two values $\text{true}$ and $\text{false}$. The conditional uses them.
- $\text{unit}$ has one value $\ast$.
- $\sigma \times \tau$ consists of pairs of elements of $\sigma$ and $\tau$, given by $(M, N)$ and accessed by $\text{fst}$ and $\text{snd}$.
The Exercise
- Write down typing rules for this language
- Give it a monadic semantics
- Give it an algebraic semantics
- Define evaluation contexts for this language.
- Prove that evaluation contexts commute with operations. (That is, show the equality given above for the relation between evaluation contexts and operations holds in the semantics.)
Exercise 2: Looking at monads
Recall the claimed example monads covered in the lecture:
**State**
\[ T_{\text{state}}(X) = (S \times X)^S \quad T_{\text{bool}}(X) = (T \times X)^T \]
**Finite Nondeterminism**
\[ T_{\text{SL}}(X) = \mathcal{F}^+(X) \]
**Continuations**
\[ T_{\text{cont}}(X) = R^{R^X} \]
**State plus exceptions**
\[ T(X) = (S \times (X + E))^S \]
**Finite Probabilistic Nondeterminism**
\[ T_{\text{prob}}(X) = \mathcal{D}_\omega(X) \]
Exercise
1. Choose a monad or two (I recommend state) and find
- their functorial actions
- their unit and multiplication,
- their (left) strength
- the algebraic structure of $T(X)$ (for example $+ : T_{SL}(X)^2 \rightarrow T_{SL}(X)$ is union).
2. (More advanced)
- Show that every monad on $\textbf{Set}$ has a unique (left or right) strength.
- Show that $T_{\text{cont}}$ is not given by any algebraic theory.
Exercise 3: Free Algebra Monads
Show that $T_{Ax}$ is indeed a strong monad, i.e.:
- Find its functorial action.
- Find its unit (well that was given) and its multiplication.
- Find its strength (Hint: if $A$ is an algebra then $A^X$ becomes one too).
Exercise 4: Identifying monads
It is claimed above that some monads are the free algebra monads associated to certain equational theories. Prove this!
**Method** We know the monads are given, up to isomorphism, by the equational logic as free algebra monads. So:
- Associate to every term $t$ a normal form $|t|$ so that $\vdash_{Ax} t = u$ iff $|t| = |u|$.
- Inspect the normal forms to see how they correspond to elements of $T(\text{Var})$.
- For example, for semilattices, every term can be proved equal to one of the form $x_1 + \ldots + x_n$ (ignoring brackets) with no $x_i$ repeated, and the $x_i$ taken in order according to some fixed enumeration of the variables.
- (If you are trying probability, use the alternative theory.)
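The suggested normal-form method for SL can be sketched directly (a sketch of the stated method, using nested tuples for terms; fixing the lexicographic order on variable names as the "fixed enumeration"):

```python
# Sketch: for SL, the normal form of a term over + is its duplicate-free,
# sorted tuple of variables; Ax |- t = u holds iff the normal forms agree.
def normal_form(t):
    """t is a variable (string) or ('+', t1, t2)."""
    def vars_of(t):
        if isinstance(t, str):
            return {t}
        _, l, r = t
        return vars_of(l) | vars_of(r)
    return tuple(sorted(vars_of(t)))
```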
Exercise 5: HP completeness
Show that $\text{BoolState}$ is Hilbert-Post complete. (Hint: see what happens if you assume two different normal forms equal.)
The axiomatisation $\text{BoolState}$ is redundant: some of the axioms can be derived from the others. Which ones are they?
Exercise 6: Probability
Show the two axiomatisations are equivalent. That is, one can define each in terms of the other. More precisely:
- In one direction, define $x +_p y$ to be $px + (1 - p)y$ then, using the alternative axiomatisation, show that all the barycentric axioms are derivable.
- In the other direction define the operations $\sum_{n,p_1,\ldots,p_n}$ in terms of the barycentric operations, and show all the axioms of the alternative theory are derivable.
- Then show the compositions of the two translations are provably equal in the relevant theories to the identity translation.
(The definition of translations between equational theories is in the next lecture.)
Prove Neumann’s Lemma: if one adds a single equation of the form $x +_p y = x +_q y$ with $0 < p < q < 1$ to $\text{Prob}$, then one can derive all such equations (this is not so easy).
Show that the only nontrivial extension of $\text{Prob}$ is $\text{SL}$ the theory of semilattices.
**Method:** show that if you add one equation between two different normal forms then the resulting theory is either equationally inconsistent or else it is intertranslatable with $\text{SL}$; you will need Neumann’s lemma.
Enhancing LOD Complex Query Building with Context
Ricardo Brandão, Paulo Maio and Nuno Silva
Knowledge Engineering and Decision Support Research Center
School of Engineering, Polytechnic of Porto
Porto, Portugal
{jrmjb, pam, nps}@isep.ipp.pt
Abstract—Open ontology-described repositories are becoming very common on the web and in enterprises. These repositories are well suited to answering complex queries, but in order to fully exploit their potential, the queries should be written on a user-demand basis, not in a traditional static approach by software developers. Hence, the users are required (i) to know the underlying ontology(ies) and (ii) to write formal queries. Yet, users often lack such skills. In this paper we first describe the observations made during a manual complex-querying process and present a systematization of the users’ support wish list for building complex queries. Based on this systematization we propose an extended set of functionalities for a user-supporting system. Finally, we demonstrate their application in a walk-through example and their implementation within a prototype.
Keywords - ontology; complex questions; knowledge management;
I. INTRODUCTION
Open information repositories described by means of ontologies are becoming very common both on the web (e.g. DBPedia) and in enterprises (e.g. World Search). The World Search [1] project aims to create a system that supports healthcare professionals such as nurses, geriatricians and physiatrists in finding information that answers complex questions involving many concepts (not always explicitly stated in the ontology/schema), relations and repositories.
These repositories are often referred to as Linked Open Data repositories, or simply LOD. Due to their well-formed structure (and sometimes semantics), a new set of applications and demands is arising. However, (i) the schemas of these repositories are very dynamic, and (ii) the requirements and semantics put upon the schemas and upon the data are heterogeneous, dynamic and potentially unlimited, hence raising operational issues.
In this paper, we focus on supporting the full exploitation of these repositories, such that the questions are written on a user-demand basis rather than being made available by the developer. In particular, we intend to support users in building complex queries, as they often lack the required fundamental skills: (i) knowing the underlying schemas/ontologies and (ii) specifying the queries formally (e.g. by means of a query language).
This paper starts by reporting the team’s investigations regarding (i) the ways users make use of these repositories when trying to respond to complex questions, and (ii) how they expect to be supported in that task (section II). Observations showed that the repositories’ front-ends do not provide enough support for the task. Based on these findings, we propose a set of functionalities that should be included in a complex-query-supporting front-end (section III). A walk-through example illustrates the application of these functionalities (section IV). Section V describes the architecture and implementation decisions in the context of a larger effort to build a healthcare-related semantic repository. Finally, we draw some final remarks and describe the follow-up research directions (section VI).
II. INVESTIGATION
Because the domain knowledge of the World Search project is not mainstream and therefore often incomprehensible for non-experts, we decided to investigate the complex query building task with a common-sense question from the DBPedia [2]: “All soccer players, who played as goalkeeper for a club that has a stadium with more than 40.000 seats and who are born in a country with more than 10 million inhabitants”. Obviously this question is not answerable through “search, browsing & navigation” operations without further data processing.
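A formal counterpart of this question would be a SPARQL query roughly along the following lines. This is only a sketch: the class and property names below are assumptions based on common DBpedia vocabulary, not taken from the paper, and the actual DBpedia schema (cf. Fig. 2) may differ.

```sparql
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbr: <http://dbpedia.org/resource/>

SELECT DISTINCT ?player WHERE {
  ?player a dbo:SoccerPlayer ;
          dbo:position dbr:Goalkeeper_(association_football) ;
          dbo:team ?club ;
          dbo:birthPlace ?country .
  ?club dbo:ground ?stadium .
  ?stadium dbo:seatingCapacity ?seats .
  ?country a dbo:Country ;
           dbo:populationTotal ?pop .
  FILTER (?seats > 40000 && ?pop > 10000000)
}
```

Writing such a query by hand presupposes exactly the two skills the trial participants lacked: knowledge of the vocabulary and fluency in the query language.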
A possible UML representation of the underlying ontology of the question is depicted in Fig. 1. Notice that all the concepts, relations, attributes and values present in the aforementioned question are included in the ontology.

According to the DBPedia, the ontology module capable of answering this question is slightly different (Fig. 2).
In order to evaluate current search tools, we conducted a trial with 6 users. The participants, mainly informatics students, were very familiar with the usage of general-purpose search engines (e.g. Google, Bing) and faceted search. Three of them were familiar with the concepts of ontology/schema, triples, LOD and SPARQL. None of them were familiar with the DBPedia repository and its schema/ontology.
We asked the users to answer the question making use of:
- Faceted Wikipedia Search [3], which allows users to ask complex queries against Wikipedia based on the schema/ontology and information extracted from many different Wikipedia articles;
- RelFinder [4], a user-centered tool for interactively discovering relationships between elements in the Semantic Web;
- gFacet [5], a front-end for formulating complex queries that supports navigation through the ontologies;
- any other tool the participants wanted.
Further, two restrictions were stated: (i) the maximum duration (75 minutes) and (ii) the location (in the lab) so that the team could witness the efforts and the adopted approaches. In the end, no participant succeeded.
Based on this result the team re-ran the experiment, but this time the examiners supported the participants with some guidance and explanations. After task completion the participants were asked about their wish list for supporting complex query formulation. The lists were summarized as follows:
- Help find the attributes names whose values include a specific value;
- Provide abstractions for class expressions (e.g. +10MPopulationCountry = Country \( \sqcap \exists\,\text{populationTotal}.( > 10M) \));
- Abstract the subsumption relation (e.g. Country\( \rightarrow \)PopulatedPlace\( \rightarrow \)Place), such that it is possible to filter the individuals of the super class to those of the subclass;
- Reduce the size of the proposed ontology entities, based on the current query context (defined by the ontology entities already selected);
- Combine all these requirements into a single tool.
III. PROPOSAL
In this section we describe our proposal independently of any concrete implementation or application domain. The proposal relies on a set of six high-level components that are suitably combined into a user application. Through this application the user interacts with and exploits the functionalities of these components iteratively and arbitrarily in order to obtain a set of results for the complex question s/he is trying to answer. These components are summarized as follows.
A. Ontology-based Repository
The Ontology-based Repository component is where the data/information/knowledge necessary to answer the user complex questions is maintained. Thus, an ontology-based repository can be seen as an ontology. Ontologies may be expressed in a variety of ontology languages (e.g. OWL [6]) that define the ontology entities. Fortunately, most of these languages share the same kind of entities, often with different names but comparable interpretations. Thus, an ontology can be minimally defined as follows.
**Definition 1 (Ontology)** – An ontology \( O \) (also known as a knowledge base) is a tuple \( O = (T, A) \) where \( T \) is the set of terminological axioms and \( A \) the set of assertional axioms. Both are defined based on a structured vocabulary \( V = (C, R) \) comprising concepts (or classes) \( C \) and roles (or properties) \( R \). Concept (and role) axioms are of the form \( C \sqsubseteq D \ (R \sqsubseteq S) \) or \( C \equiv D \ (R \equiv S) \) such that \( C, D \in \mathcal{C}, \ R, S \in \mathcal{R} \). For a set of individuals \( J \), concept and role assertions are of the form \( C(a) \) or \( R(b, c) \) such that \( a, b, c \in J \).
Yet, it is worth mentioning that ontology entities are not necessarily named. In fact, ontology entities can be constructed out of other entities. As an example, a concept may be created out of a restriction of a role. Moreover, ontology languages are often extended by entity languages adding operators (e.g. concatenation of strings) to manipulate the ontology entities.
The semantics related to an ontology is provided by an interpretation \( \mathcal{I} \) over a domain \( \Delta \) such that it maps: (i) the elements of the domain to the ontology instances, (ii) the subsets of the domain to the ontology concepts, and (iii) the binary relations on the domain to the ontology roles.
Finally, information on ontology-based repositories is typically accessible through a query language such as SPARQL [7].
B. Contextualization
The Contextualization component is responsible for providing one or more relevant fragments of the ontology describing the repository, according to a desired level of abstraction. To this end, the adoption and combination of three modularization techniques [8] is envisaged: (i) ontology partitioning, (ii) module extraction and (iii) ontology summarization. Next, each of these techniques is described according to its goals.
An ontology partitioning technique identifies the key topics of an ontology and splits it into several fragments [9].
Typically, each key topic gives rise to a fragment, usually called a module (cf. Definition 2).
**Definition 2 (Module)** – A module $M$ of an ontology $O = (T, A)$ is defined as $M(O) = (T', A')$, where $T' \subseteq T$ and $A' \subseteq A$ are the axioms dealing with (i) concepts $C'$, (ii) roles $R'$ and (iii) individuals $I'$ such that: (a) $C' \subseteq C$, (b) $R' \subseteq R$ and (c) $I' \subseteq I$, respectively. Accordingly, an ontology module is per se an ontology too.
A partition technique is formalized as follows.
**Definition 3 (Ontology Partitioning)** – The Partitioning task is seen as a function $p: O \rightarrow P$ where an ontology $O$ is split into a set of modules $P$ with $N$ elements (modules) such that $P = \{O_1, O_2, \ldots, O_N\}$.
A module extraction technique aims to extract a focused fragment (or module) of the original ontology given a specific topic of interest [10]. The topic of interest is captured by the notion of signature (cf. Definition 4).
**Definition 4 (Signature)** – A signature $S$ to extract a module $M = (T', A')$ from $O = (T, A)$ is defined as $S(O) = (T'', A'')$ where $T'' \subseteq T'$ and $A'' \subseteq A'$ are the axioms (concepts $C''$, roles $R''$ and individuals $I''$) specifying the context of the module to be extracted such that: $C'' \subseteq C' \subseteq C$, $R'' \subseteq R' \subseteq R$ and $I'' \subseteq I' \subseteq I$.
A module extraction technique is formalized as follows.
**Definition 5 (Module Extraction)** – The Module Extraction task is seen as a function $\sigma: (O, S) \rightarrow M$ where an ontology module $M$ is extracted from an ontology $O$ according to a given signature $S$.
An ontology summarization technique provides a succinct representation (or compressed version) of the ontology (referred to as summary), emphasizing the topics contained in the ontology for visualization and navigation purposes [11],[12].
**Definition 6 (Summary)** – A summary description $D$ of an ontology $O = (T, A)$ is defined as $D(O) = (T'', A'')$ where $T'' \subseteq T$ and $A'' \subseteq A$ are the axioms specifying the concepts $C''$, the roles $R''$ and the individuals $I''$ that summarize the ontology such that: $C'' \subseteq C$, $R'' \subseteq R$ and $I'' \subseteq I$ respectively.
The ontology summarization technique is then formalized as follows.
**Definition 7 (Ontology Summarization)** – The Ontology Summarization task is seen as a function $\varphi: O \rightarrow D$ where a description $D$ is generated to summarize the ontology $O$.
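As an illustration of Definitions 3, 5 and 7, the three functions can be sketched over a toy ontology modeled as a set of axiom strings. The helper names and the selection heuristics below are our own illustrative assumptions, not part of the formal definitions:

```python
# Illustrative sketch (not the formal definitions): a toy ontology is a
# set of axiom strings, and the three modularization tasks are functions
# over that set.

def partition(O, n):
    """Definition 3 sketch: split ontology O into n disjoint modules."""
    axioms = sorted(O)
    return [set(axioms[i::n]) for i in range(n)]

def extract_module(O, signature):
    """Definition 5 sketch: keep the axioms mentioning a signature term."""
    return {a for a in O if any(term in a for term in signature)}

def summarize(O, k):
    """Definition 7 sketch: keep the k shortest axioms as a crude summary."""
    return set(sorted(O, key=len)[:k])

O = {"SoccerPlayer subClassOf Person",
     "SoccerClub subClassOf Organisation",
     "team domain SoccerPlayer",
     "team range SoccerClub"}
```

All three functions return objects of the same kind as their input (sets of axioms); they differ only in how the fragment is selected.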
It is worth noticing that, from the perspective of an ontology, the notions of (i) module ($M$), (ii) signature ($S$) and (iii) summary ($D$) have similar formal definitions. However, these notions differ in their purpose and in their extension (in terms of set inclusion).
**C. Object Mapping**
The Object Mapping component is responsible for mapping a natural language text introduced by the user to ontological entities. For that, we envisage a three-step approach as follows.
The first step is completely automatic. It consists of applying a set of Natural Language Processing (NLP) techniques (e.g. tokenization, stop-word removal, lemmatization, stemming) to identify the relevant terms in a natural language text. This step is formalized as follows.
**Definition 8 (Terms Identification)** – The terms identification task is seen as a function $nlp: text \rightarrow T$ such that $text$ refers to a set of words/phrases according to a natural language and $T$ is the set of relevant terms resulting from a linguistic interpretation of $text$.
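A minimal sketch of such an nlp function, assuming plain tokenization and a small illustrative stop-word list (a real implementation would also apply lemmatization and stemming):

```python
import re

STOP_WORDS = {"the", "a", "an", "of", "for", "who", "that", "as"}  # illustrative

def nlp(text):
    """Definition 8 sketch: lowercase, tokenize and drop stop words."""
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    return {t for t in tokens if t not in STOP_WORDS}
```

For instance, nlp("soccer player") yields the set {soccer, player}, matching the first step of the walk-through example in Section IV.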
The second step is also completely automatic. It aims to identify, for each term, the set of plausible ontological entities the user is interested in. This step is formalized as follows.
**Definition 9 (Mapping Terms)** – The task of mapping a set of terms ($T$) to the corresponding ontological objects is a function $map(T, O, C) \rightarrow Map$ such that:
- $T$ is a set of terminological terms;
- $O$ is the ontology describing the repository;
- $C \subseteq O$ is a subset of the ontology describing the repository. It defines the ontological context in which the terms introduced by the user must first be considered;
- $Map$ is the set of mappings found. Each element $m \in Map$ is a triple $m = (t, e, v)$ expressing that the term $t \in T$ is mapped to the ontological entity $e \in O$ with a confidence value $v \in ]0, \infty[$. The ontological entity might be a class, a property, an individual or an axiom (e.g. a constraint).
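A toy version of the map function, using a naive case-insensitive substring match of terms against entity names restricted to the context $C$; the entity set and the confidence formula are illustrative assumptions only:

```python
def map_terms(T, O, C):
    """Definition 9 sketch: map each term to the entities of the context
    C whose name contains it; the confidence formula is illustrative."""
    Map = []
    for t in T:
        for e in C:
            if t.lower() in e.lower():
                v = len(t) / len(e)   # crude confidence value in ]0, 1]
                Map.append((t, e, v))
    return Map

entities = {"SoccerPlayer", "SoccerClub", "RugbyPlayer", "Stadium"}
```

With T = {soccer, player} and C = entities, both terms map (among others) to SoccerPlayer, which is why a disambiguation step can favour the entity common to all terms.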
The last step is the disambiguation. This consists of mapping each term $t \in T$ to the unique ontological entity the user is interested in, based on the outcome of the previous step ($Map$). Despite the provided contextualization support, for each term $t \in T$ there might exist in $Map$ more than one possible ontological entity to map to. In such cases, and in order to reduce the user effort in selecting the ontological entities, a (semi-)automatic approach must be adopted.
**D. Relationship Searcher**
The Relationship Searcher component is responsible for finding the ontological relations existing between two ontological entities in a given context.
**Definition 10 (Finding Relationships)** – The task of finding relationships between two ontological entities is a function $relSearch(e, e', O, C, length, dir) \rightarrow Rels$ such that:
- $e \in O$ and $e' \in O$ are the ontological entities among which it is necessary to find out the existing relationships;
- \( O \) and \( C \subseteq O \) are the ontology describing the repository and the ontological context defining the searching space respectively;
- \( \text{length} \geq 1 \) defines the maximum admissible number of entities between \( e \) and \( e' \). If \( \text{length} = 1 \), it means that only direct relationships between \( e \) and \( e' \) must be considered;
- \( \text{dir} \in \text{DIR} \) specifies directionality constraints on the relationships. As an example, considering \( \text{DIR} = \{ \text{forward}, \text{backward} \} \), one might constrain relationships to those that go from \( e \) to \( e' \) only (forward) or vice-versa (backward);
- \( \text{Rels} \) is the set of relationships existing between \( e \) and \( e' \). Each element \( rel \in \text{Rels} \) is a tuple \( rel = (r, v) \) such that \( r \) is a relationship path either in the form of \( \{ e, e_1, e_2, ..., e_n, e' \} \) or \( \{ e', e_1, e_2, ..., e_n, e \} \) where \( n < \text{length} \), and \( v \in ]0, \infty[ \) is a value expressing the relevance of the relationship path.
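A sketch of forward-only relationship search over a toy set of class-level triples (subject, property, object); path pruning by `length` follows Definition 10, while the cycle avoidance and the sample triples are our own assumptions (relevance values are omitted):

```python
def rel_search(e, e_target, triples, length=1):
    """Definition 10 sketch (forward direction only): return the paths
    [e, p1, e1, ..., e_target] with fewer than `length` intermediate
    entities between e and e_target."""
    paths = []

    def walk(node, path, intermediates):
        for s, p, o in triples:
            if s != node or o in path:   # forward edges only, no cycles
                continue
            if o == e_target:
                paths.append(path + [p, o])
            elif intermediates + 1 < length:
                walk(o, path + [p, o], intermediates + 1)

    walk(e, [e], 0)
    return paths

triples = [("SoccerPlayer", "team", "SoccerClub"),
           ("SoccerPlayer", "birthPlace", "PopulatedPlace"),
           ("SoccerClub", "ground", "Stadium")]
```

With length = 1 only direct relationships such as {SoccerPlayer, team, SoccerClub} are found; raising length to 2 also reaches Stadium through SoccerClub.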
E. Constraints Specification
The Constraints Specification component is responsible for supporting the user in specifying constraints over the ontological entities (properties) s/he is interested in. By this means, the user is able to properly filter the retrieved results such that they meet the user’s needs, and to abstract (identify) the intended class of individuals. For that, three processes were identified.
The first process consists of determining which logical operators (e.g. equal, greater than, contains, existential and universal quantifiers) are applicable to a given ontological property, considering both the existing ontological definitions (e.g. the range) and the context in which that property is being used. This process is formalized as follows.
**Definition 11 (Applicable Operators)** – The task of determining the applicable logical operators on a property is a function \( \text{opers}(p, O, C, Y) \rightarrow Y' \) such that:
- \( p \in O \) is the ontological property for which the applicable operators are to be determined;
- \( O \) and \( C \subseteq O \) are respectively the ontology describing the repository and the ontological context in which \( p \) is being used;
- \( \text{Y} \) is the set of available logical operators in the system;
- \( \text{Y}' \subseteq \text{Y} \) is the subset of available logical operators that are applicable to \( p \) in the context \( C \).
The second process consists in retrieving the most common values taken for a property in a given context. This process aims to help the user on:
- Perceiving which kind of values are admissible (e.g. numerical, string, date, ontological resource);
- Typing the admissible values. This is especially helpful for (object) properties whose admissible values are ontological entities (resources) only. In this case, it eases the required object mapping task since the most common values are themselves ontological entities.
This process is formalized as follows.
**Definition 12 (Property Common Values)** – The task of determining the most common values of a property is a function \( \text{values}(p, O, C, \text{top}) \rightarrow \text{PV} \) such that:
- \( p \in O \) is the ontological property to retrieve the most common values;
- \( O \) and \( C \subseteq O \) are respectively the ontology describing the repository and the ontological context for determining the common values of \( p \);
- \( \text{top} \) determines the amount of most common values to retrieve;
- \( \text{PV} \) is the set with the top most common values of property \( p \). Each element \( pv \in \text{PV} \) is a tuple \( pv = (pv', v) \) such that \( pv' \) is a property value (either an ontological entity or a literal) and \( v \in ]0, \infty[ \) is a value expressing how common \( pv' \) is regarding to \( p \).
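The Property Common Values task maps naturally onto a SPARQL aggregation query; a hypothetical generator for such a parameterized query (the template itself is our assumption, not the paper's implementation) could look like:

```python
def common_values_query(prop_uri, top):
    """Definition 12 sketch: build a SPARQL query returning the `top`
    most common values of a property together with their usage counts."""
    return (
        "SELECT ?pv (COUNT(?s) AS ?v) WHERE { "
        f"?s <{prop_uri}> ?pv . "
        "} GROUP BY ?pv ORDER BY DESC(?v) "
        f"LIMIT {top}"
    )
```

Executed against DBPedia with the "position" property and top = 5, such a query would produce a result of the shape shown in TABLE V.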
The third process concerns the ability to construct the appropriate constraint given (i) a property, (ii) an operator \( (op \in \text{Y}) \) and (iii) a property value \( (pv') \). This process is formalized as follows.
**Definition 13 (Constraint)** – The task of constructing a constraint on a property is a function \( \text{constraint}(p, op, pv') \rightarrow \lambda \) such that:
- \( p \in O \) is the ontological property on which the constraint applies;
- \( op \in \text{Y} \) is the logical operator applied in the constraint;
- \( pv' \) is a property value;
- \( \lambda \) is the resulting constraint on \( p \) such that \( op \) holds for \( pv' \).
F. Query Building
The Query Building component is responsible for dynamically generating the appropriate query (Q) in a query language (QL) supported by the repository (e.g. SPARQL [7]) in order to execute it against the repository and, therefore, retrieve the results (i.e. the answer) for the complex question formulated by the user.
**Definition 14 (Query)** – The task of constructing a query is a function \( \text{query}_{\text{QL}}(G, \Lambda, Z) \rightarrow Q_{\text{QL}} \) such that:
- \( G \) is a connected graph composed of the ontology entities selected by the user;
- \( \Lambda \) is the set of constraints specified by the user;
- \( Z \) is a set of ordering clauses. Each element \( z \in Z \) is a tuple \( z = (e, \text{asc}) \) such that \( e \in G \) is an ontological entity and \( \text{asc} \in \{ \text{true}, \text{false} \} \) stating an ascending (true) or a descending (false) order;
- \( Q_{\text{QL}} \) is the resulting query formulated in the query language QL and is ready to be executed against the repository.
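A minimal sketch of this translation, assuming G is given as triple patterns over SPARQL variables, Λ as textual FILTER expressions and Z as (variable, ascending) pairs; these representation choices are ours, not the paper's:

```python
def build_query(G, constraints=(), ordering=()):
    """Definition 14 sketch: translate a connected graph G (triple
    patterns), constraints (FILTER expressions) and ordering clauses
    into a SPARQL SELECT query string."""
    variables = sorted({v for s, _, o in G for v in (s, o)})
    lines = [f"{s} <{p}> {o} ." for s, p, o in G]
    lines += [f"FILTER ({c})" for c in constraints]
    query = f"SELECT {' '.join(variables)} WHERE {{ {' '.join(lines)} }}"
    if ordering:
        keys = " ".join(f"ASC({e})" if asc else f"DESC({e})"
                        for e, asc in ordering)
        query += f" ORDER BY {keys}"
    return query
```

The walk-through of Section IV follows exactly this pattern: the user-selected graph grows incrementally and, at each interaction, a fresh query is generated and executed.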
IV. WALK-THROUGH EXAMPLE
To demonstrate our approach and how the aforementioned components might be exploited together by an application assisting the user to accomplish the task of asking complex questions, we present now a walk-through example. In this example, the user makes use of an application whose underlying repository is the DBPedia (i.e. $\mathcal{O}_{DBPedia}$) to find an appropriate answer to the previously mentioned question: “All soccer players, who played as goalkeeper for a club that has a stadium with more than 40,000 seats and who are born in a country with more than 10 million inhabitants”.
Considering the question at hand, let us assume the user starts searching using the text “soccer player”. Thus, the Object Mapping component is required to map such text to some ontology entities. The result of the first step is as follows:
$$nlp(\text{“soccer player”}) \rightarrow T_1: T_1 = \{\text{soccer}, \text{player}\}$$
Hence, in the second step two terms (soccer and player) must be mapped to ontological entities. Since there is no previous context for these terms, it is assumed that the context is the whole ontology. A possible result of function $map(T_1, \mathcal{O}_{DBPedia}, \mathcal{O}_{DBPedia})$ is depicted in TABLE I, where the confidence value of each term-entity pair is omitted for clarity.
**TABLE I.** **THE ONTOLOGICAL ENTITIES MAPPED TO EACH TERM**
<table>
<thead>
<tr>
<th>Term</th>
<th>List of Entities</th>
</tr>
</thead>
<tbody>
<tr>
<td>soccer</td>
<td>[SoccerManager, SoccerClub, SoccerPlayer, SoccerLeague]</td>
</tr>
<tr>
<td>player</td>
<td>[RugbyPlayer, GolfPlayer, SnookerPlayer, ChessPlayer, AmericanFootballPlayer, SoccerPlayer, BaseballPlayer, VolleyballPlayer, BadmintonPlayer, playerInTeam, GaelicGamesPlayer, GridironFootballPlayer, PokerPlayer, IceHockeyPlayer, TennisPlayer, CanadianFootballPlayer, BasketballPlayer]</td>
</tr>
</tbody>
</table>
In the disambiguation step, the system may strongly suggest the concept “SoccerPlayer” because it is the one common to both terms. However, the user may select one of the other possible entities instead. Regarding the question at hand, consider that the user confirms the system suggestion: “SoccerPlayer”. This entity becomes the current context.
Next, the system may contextualize the “SoccerPlayer” entity. For that, the system can extract a module of the ontology focused on this entity such that $\sigma(\mathcal{O}_{DBPedia}, \{\text{SoccerPlayer}\}) \rightarrow M_1$. Yet, considering that the resulting module ($M_1$) contains several entities that cannot all be (easily) represented in the GUI, the system may opt to summarize it such that $\varphi(M_1) \rightarrow D_1$. In that sense, let us admit that, among other entities, $D_1$ contains the concepts “SoccerClub”, “Person” and “PopulatedPlace”, and no relationships between these concepts and “SoccerPlayer”.
Afterward, as a second interaction, the user could request the system to find relationships between “SoccerPlayer” and “SoccerClub”. The result of executing $relSearch(\text{SoccerPlayer}, \text{SoccerClub}, \mathcal{O}_{DBPedia}, M_1, 1, \text{forward})$ is $Rels_1$, which is partially depicted in TABLE II.
**TABLE II.** **THE RELATIONSHIPS FOUND BETWEEN “SOCCERPLAYER” AND “SOCCERCLUB”**
<table>
<thead>
<tr>
<th>$r$</th>
<th>$\nu$</th>
</tr>
</thead>
<tbody>
<tr>
<td>{SoccerPlayer, team, SoccerClub}</td>
<td>270061</td>
</tr>
<tr>
<td>{SoccerPlayer, clubs, SoccerClub}</td>
<td>243588</td>
</tr>
<tr>
<td>{SoccerPlayer, currentclub, SoccerClub}</td>
<td>36463</td>
</tr>
<tr>
<td>{SoccerPlayer, youthclubs, SoccerClub}</td>
<td>26396</td>
</tr>
<tr>
<td>{SoccerPlayer, title, SoccerClub}</td>
<td>108</td>
</tr>
</tbody>
</table>
It is worth noticing that the property “clubs”, required by the solution proposed in DBPedia (cf. Fig. 2), is identified and suggested to the user. Since the user does not know that, let us admit s/he selects the property “team” instead of “clubs”.
Further, the user may repeat the process of finding relationships for the concepts “SoccerPlayer” and “PopulatedPlace”. The consequent result is partially depicted in TABLE III.
**TABLE III.** **THE RELATIONSHIPS FOUND BETWEEN “SOCCERPLAYER” AND “POPULATEDPLACE”**
<table>
<thead>
<tr>
<th>$r$</th>
<th>$\nu$</th>
</tr>
</thead>
<tbody>
<tr>
<td>{SoccerPlayer, birthPlace, PopulatedPlace}</td>
<td>104931</td>
</tr>
<tr>
<td>{SoccerPlayer, placeOfBirth, PopulatedPlace}</td>
<td>98410</td>
</tr>
<tr>
<td>{SoccerPlayer, countryOfBirth, PopulatedPlace}</td>
<td>35649</td>
</tr>
<tr>
<td>{SoccerPlayer, cityOfBirth, PopulatedPlace}</td>
<td>30979</td>
</tr>
<tr>
<td>{SoccerPlayer, deathPlace, PopulatedPlace}</td>
<td>5010</td>
</tr>
</tbody>
</table>
In this case, consider that the user selects the property “countryOfBirth” because it is more meaningful than the others, as it contains the word “country”, which is used in the question at hand. The specification made by the user after the three interactions just described is graphically depicted in Fig. 3.
Figure 3. The specification made by the user after three interactions.
Along with the question specification, the system might provide the user with some results. Hence, consider $G_1$ as the connected graph represented in Fig. 3 and $\Lambda$ and $\mathcal{Z}$ as empty sets. Therefore, the output of $query_{SPARQL}(G_1, \emptyset, \emptyset)$ is the query represented in Fig. 4.
```sparql
SELECT ?player ?club ?place
WHERE
{
?player a <http://dbpedia.org/ontology/SoccerPlayer> .
?player <http://dbpedia.org/ontology/team> ?club .
?club a <http://dbpedia.org/ontology/SoccerClub> .
?player <http://dbpedia.org/ontology/countryOfBirth> ?place .
?place a <http://dbpedia.org/ontology/PopulatedPlace> .
}
```
Figure 4. The SPARQL query resulting from $G_1$.
By executing this query the system would return approximately 34523 results. Faced with this huge number of results, the user decides to start filtering them. For that, the user may request a search for “goalkeeper” in the context of the entity “SoccerPlayer”. Again, the Object Mapping component is exploited. As a result, the system would perceive that the term “goalkeeper” is mostly found as (part of) a value of the data property “position”. Upon receiving this suggestion, the user may select that property and request a new constraint for it. Then, the system runs sequentially two processes of the Constraints Specification component: (i) the Applicable Operators and (ii) the Property Common Values. Possible results of these processes are presented in TABLE IV and TABLE V respectively.
**TABLE IV.** **THE APPLICABLE OPERATORS TO THE PROPERTY “POSITION”**
<table>
<thead>
<tr>
<th>Property</th>
<th>Range</th>
<th>Applicable Operators</th>
</tr>
</thead>
<tbody>
<tr>
<td>position</td>
<td>string</td>
<td>{=, !=, <, >, ≤, ≥, contains, starts, ends}</td>
</tr>
</tbody>
</table>
**TABLE V.** **THE FIVE MOST COMMON VALUES OF PROPERTY “POSITION”**
<table>
<thead>
<tr>
<th>Property (p)</th>
<th>Property Value (pv')</th>
<th>v</th>
</tr>
</thead>
<tbody>
<tr>
<td>position</td>
<td>Midfielder</td>
<td>14135</td>
</tr>
<tr>
<td></td>
<td>Defender</td>
<td>11157</td>
</tr>
<tr>
<td></td>
<td>Striker</td>
<td>8179</td>
</tr>
<tr>
<td></td>
<td>Goalkeeper</td>
<td>5889</td>
</tr>
<tr>
<td></td>
<td>Forward</td>
<td>4658</td>
</tr>
</tbody>
</table>
By inspecting the most common values of the “position” property, the user may request to create a constraint such that constraint(“position”, =, “goalkeeper”). As a result, a new query is executed against the repository, returning approximately 3178 results.
For the required restriction on the concept “PopulatedPlace”, the user may adopt a process similar to the one described for “goalkeeper”. By doing that using terms such as “inhabitants” or “habitants”, the user would not be able to find any relevant and suitable property. This fact leads the user to search for alternative terms such as synonyms.1 By using the synonym “population”, properties such as (i) “populationTotal”, (ii) “totalPopulation” and (iii) “populationEstimate” will be suggested. User attempts with the first two properties would have an empty set of results as outcome. With the third attempt (i.e. “populationEstimate”), the set of results would have approximately 1462 elements.
To restrict the results to those players related to clubs whose stadiums have more than 40,000 seats, the user may search for the term “stadium”, which will be disambiguated to the ontological concept “Stadium”. Further, the user requests the system to find relationships between “SoccerClub” and “Stadium”. The most relevant property between these entities is “ground”. Then, by searching for the term “seats” in the context of “Stadium”, the system will suggest the property “seatingCapacity” as one of the most relevant. Consequently, the user specifies the respective constraint. The entire user specification is graphically depicted in Fig. 5.
By executing the query resulting from this specification, the user obtains approximately 406 results that definitely answer the question at hand.
**V. IMPLEMENTATION**
This section describes our implementation efforts of the proposed ideas in the context of the World Search project.
**A. World Search Architecture**
The World Search system architecture is illustrated in Fig. 6.
This architecture has six main modules with the following responsibilities:
- GUI is responsible for the interaction with the user: (i) it enables the user to formulate queries and (ii) presents the respective responses (i.e. the retrieved results);
- Query Dispatcher is responsible for forwarding the queries to the proper answering module;
- Ontological Services module is responsible for managing the semantic information of the system (e.g. maintaining the ontologies underlying the system) and providing semantic services (e.g. correspondences between concepts, synonyms) to the other system modules;
- Syntactic Query is responsible for retrieving resources based on a text-based search only. It might make use of the ontological services to expand the query based on the synonyms of the words specified in the query;
---
1 The synonyms must also be suggested/requested by/to the system.
• Semantic Query is responsible for retrieving resources based on the semantic entities specified by the user. The ontological services are exploited in order to perceive the relations between those entities and other ontological entities;
• Repository is where all the data/information supporting both the syntactic and the semantic queries is maintained. Currently, this module is mainly composed of (i) FAST technology [13] for indexing purposes and (ii) a data source meeting the Linking Open Data principles [14].
B. Developed Prototype
The components of our proposal integrate into the World Search architecture as illustrated in Fig. 7.
2) Object Mapping
The suggestions provided by the Object Mapping component are retrieved by an indexing mechanism (FAST). This mechanism indexes the content of a configurable set of selected properties determined by the entity type. The current configuration is represented in TABLE VI. Each selected property is associated with a weight value, which is used for ranking purposes.
**TABLE VI.** **THE INDEXED PROPERTIES AND WEIGHTS BY ENTITY TYPE**
<table>
<thead>
<tr>
<th rowspan="2">Entity Type</th>
<th colspan="2">Dimensions</th>
</tr>
<tr>
<th>Property</th>
<th>Weight</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="2">Class</td>
<td>rdf:label</td>
<td>3</td>
</tr>
<tr>
<td>rdf:comment</td>
<td>1</td>
</tr>
<tr>
<td rowspan="2">Object Property</td>
<td>rdf:label</td>
<td>3</td>
</tr>
<tr>
<td>rdf:comment</td>
<td>1</td>
</tr>
<tr>
<td rowspan="2">Data Property</td>
<td>rdf:label</td>
<td>3</td>
</tr>
<tr>
<td>rdf:comment</td>
<td>1</td>
</tr>
<tr>
<td>Individuals</td>
<td>-</td>
<td>3</td>
</tr>
</tbody>
</table>
Moreover, the indexing mechanism makes use of a text analyzer, which is responsible for a set of NLP tasks on the text content to be indexed. The same analyzer is also applied to the text introduced by the user in order to improve the suggestions made.
3) Relationship Searcher
The Relationship Searcher component implementation relies on a pre-defined set of SPARQL queries against the repository. The value expressing the relevance of each relationship path found is given by counting the number of times that relationship is instantiated in the context of the two input ontology entities. Further, the set of relationships found is pruned based on the input context.
4) Constraints Specification
The Constraints Specification component makes use of an interpretation function mapping the range of a given property to a set of logical operators supported by the SPARQL endpoint of the repository. This interpretation function is able to dynamically exploit the XML Built-in Datatype hierarchy [18] to infer the best possible mapping. For non-recognized ranges, the function exploits a static list of mappings that can be seeded by users with special rights.
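A sketch of such an interpretation function, as a lookup from the property range to operator sets with a static fallback for unrecognized ranges; the concrete operator sets below are illustrative assumptions, not the prototype's actual configuration:

```python
# Interpretation-function sketch: map a property range (an XSD datatype
# name) to the applicable logical operators; for unrecognized ranges,
# fall back to a static, user-seeded list.
XSD_OPERATORS = {
    "xsd:string":  {"=", "!=", "contains", "starts", "ends"},
    "xsd:integer": {"=", "!=", "<", ">", "<=", ">="},
    "xsd:date":    {"=", "!=", "<", ">"},
}
STATIC_FALLBACK = {"=", "!="}  # maintained by users with special rights

def applicable_operators(range_name):
    return XSD_OPERATORS.get(range_name, STATIC_FALLBACK)
```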
The most common values of a given property are determined through a set of parameterized SPARQL queries against the repository. These queries take into consideration the intended context on which the property is being used.
5) Query Building
The Query Building component is implemented following a straightforward translation approach of a connected graph to a text representing a query in SPARQL.
Finally, it is worth mentioning that the components integrated into the Ontological Services module are all stateless. Therefore, it is the responsibility of the Semantic Query module to maintain an application status. This status is used namely to keep track of the ontology entities the user is interested in and the current context.
² "rdf" stands for the RDF Schema vocabulary whose namespace is http://www.w3.org/1999/02/22-rdf-syntax-ns#
VI. DISCUSSION
This section provides a discussion upon the proposed contributions both in comparison with the related work and introspectively.
A. Related Work
In [4] the authors present the RelFinder tool, based on the ORVI four-step process: Object Mapping, Relationship Search, Visualization, and Interactive Exploration. This tool comprises the Object Mapping and Relationship Searcher components only. Unlike in our proposal, the retrieved results do not take the context into consideration. Further, while this tool supports the user in the process of discovering relationships between ontological entities, our approach combines the two components to support the user in building complex queries.
In [3] the authors propose a method and a tool that allow and support users to access and explore the Semantic Web using faceted search, narrowing down large sets of data. The underlying “graph provides a coherent representation” of the (search) facets, thus enhancing the relationships between such facets. The authors propose a three-stage model consisting of (1) Goal formation, where the user typically specifies one or more concepts to determine the initial search space; then (2) the user incrementally constructs the search space by exploiting the relationships between the starting concepts and others linked to them; and finally (3) the Multi-perspective exploration and sense-making stage, where the user explores the space and “make[s] sense of the information and relations contained in it”. The three-stage model has been implemented in the gFacet [5] tool, which visually represents the facets in a graph. However, this tool does not allow the intersection of multiple filters (e.g. “soccer players born in a country with more than 10 million inhabitants”). Moreover, faceted search has some limitations when dealing with a vast number of entities. Our approach deals with this issue through contextualization.
B. Conclusion and Future Work
The most notable difference between our proposal and the related work is the emphasis on focusing on the relevant search space (the context) and, therefore, discarding irrelevant information. Following this understanding, we proposed the adoption and integration of modularization techniques into tasks/functions such as object mapping, relationship searching and constraints specification. As a result, these functions are not completely deterministic, because they rely on the concept of “context” to scale down their outcome. In this respect, preliminary experiments highlighted the need to considerably improve state-of-the-art modularization algorithms. This will deserve our future attention. On the other hand, the same experiments also revealed that users benefit from the contextualization process, because it permits the system to make more accurate suggestions and reduces the text-based searches made by the user.
Similarly to the related work approaches, our proposal also emphasizes the user’s role in the search, browsing and navigation process. In this respect, it is necessary to emphasize that users with the same expertise may achieve different solutions for the same complex question. This is easily perceived by comparing the solution achieved in the walk-through example with the proposed DBPedia solution. Even so, some solutions might be better than others.
Based on the feedback received from users that are already using the developed prototype, we are planning to evolve the prototype with a component responsible for providing statistical information that might help users in their decisions, as well as recommendations based on users’ past experiences.
ACKNOWLEDGMENT
This work is partially supported by the Portuguese projects: World Search (QREN11495) and OOBIAN (QREN 12677), both funded by FEDER through the COMPETE program for operational factors of competitiveness.
REFERENCES
AN IMPLEMENTATION OF SHARED MEMORY
FOR UNIX WITH REAL-TIME SYNCHRONIZATION
by
Paul K. Harter, Jr. and Gregory R. Bollendonk
CU-CS-310-85 September, 1985
University of Colorado, Department of Computer Science,
Boulder, Colorado.
An Implementation of Shared Memory for Unix with Real-Time Synchronization
Paul K. Harter, Jr.
Gregory R. Bollendonk
1. Introduction
The Distributed Computing Support system (DCS) [Harter 85b] is designed to provide both high-level language and operating systems support for the programmer building parallel, distributed applications. The above reference specifies the DCS user interface and describes the design of the DCS system. In particular, it discusses the abstraction of a distributed shared variable, and the required kernel support for shared memory. This paper describes the kernel implementation of shared memory upon which distributed shared memory rests.
Although the actual design and coding of the shared memory extension was done in the context of the DCS system, it had its beginnings as a class project for an advanced course in operating systems in the Fall Semester 1984. The purpose of the course was to provide students the opportunity to make a detailed study of the implementation of a "real" operating system, Unix. The term project for the course was to specify a useful extension to Unix and to suggest a possible implementation. One of the groups suggested the addition of shared memory. Their project provided the initial framework for our final design and implementation.
During the Spring Semester 1985, the Distributed Computing Support system was conceived and designed. Much of the low-level design and implementation of DCS was carried out as the class project for a graduate seminar in networking and distributed computing being taught by the first author.
The DCS system was to support distributed computations via asynchronous remote procedure calls and distributed shared variables. Distributed shared variables were to allow multiple processes, cooperating in a given computation, to share "global" variables, with access to these variables to be independent of process location. Sharing by processes executing on separate nodes would have to be based on a message protocol. While processes executing on the same node could use the same protocols, it was clear that they would be able to share these variables directly at much lower cost through shared memory. Thus the design of DCS provided the impetus for extending the original shared memory proposal and carrying through with its implementation.
The implementation was originally conceived for a DEC VAX 11/780 running Unix 4.2BSD and first brought up under SUN Unix version 1.3 running on SUN workstations. We have recently added our shared memory implementation to the newest version of the operating system (SUN Unix version 2.0) with only minor modification. In fact, the installation took less than one man-day.
The remainder of this report is organized as follows. Section 2 discusses the background of interprocess communication (IPC) in Unix and the requirements introduced by the DCS system. Section 3 describes the shared memory facility provided by our extension, while Section 4 discusses the kernel implementation. Section 5 discusses performance issues, and Section 6 summarizes our results and conclusions.
2. Background
The original design of Unix [Ritchie 74, Ritchie 78] contained very little support for communication between processes. There was no shared memory, no general message facility, and no general mechanism for interprocess synchronization. There were two IPC channels available, pipes and signals.
A pipe provides a one-way, FIFO, byte stream between a pair of related processes accessed via the standard Unix I/O calls. Processes are delayed upon trying to read from an empty pipe or write to a full one. The advantage to pipes is that they are well tailored to a type of processing very common to Unix systems. Signals provide a simple form of software interrupt facility. A signal is sent from one process to another and typically results in the asynchronous invocation of a handler routine in the receiving process. Signals are useful for fairly simple, pre-arranged synchronization between processes.
The greatest disadvantage to the two mechanisms described above is their lack of generality. In particular, both are limited to use between related processes. For pipes, this relation is kinship via forks from the same parent, while for signals it is determined by association with the same terminal. This precludes, for example, the dynamic creation of pipes between existing processes desiring to communicate. In the case of signals, the number and meanings of signals are fixed and determined by the system, and a signal can carry no information. Thus, a signal may not even carry the identity of the sender, which places a very strict limit on flexibility.
Arbitrary process pairs may communicate via the file system by opening the same file, but this has two drawbacks. First, speed is limited by the file system speed, even for small messages, and second, without a general synchronization mechanism, coordination of access to the shared file is difficult. Some Unix programs use the existence or non-existence of a known file to simulate a binary semaphore, but this is clearly sub-optimal.
2.1. IPC in Berkeley Unix 4.2
Later versions of Unix have contained extensions to allow more general forms of IPC. AT&T extended Unix to allow for shared memory segments, first in a non-paged and then in a paged environment. In addition to shared memory segments, AT&T Unix has semaphores and message queues. All three are implemented as Unix objects similar to files and subject to standard Unix protection checking on access. While our implementation is completely independent of the file system, it has some features in common with AT&T's System V.
The Berkeley Unix group took a different approach and added a completely general IPC facility, which included support for networking and internetworking. This system provides a new abstraction called a socket, which is a port for sending and receiving messages. Sockets are handled within a program much as file descriptors have been. In fact, the standard system read and write calls work transparently on some classes of sockets. Differentiating sockets from file descriptors are network addresses and additional system calls to associate addresses with sockets (binding) and to associate sockets in different processes with each other (connection establishment). As an alternative to the (semi-permanent) association of two sockets implied by a connection, individual messages may be addressed to specific destinations. A last and important distinction between files and sockets is that sockets are not sharable objects and thus have no need for sets of permissions such as files have. Any process can use its socket to send a message to any address.
2.2. DCS Requirements
With the ever-increasing availability of powerful workstations, there has been a rapid increase in the number of computing environments made up of personal workstations connected via a local area network. Since each workstation is dedicated to an individual who is likely to spend most of his time typing (e.g., word processing, program editing), the workstations will be grossly under-utilized most of the time. The DCS system is an attempt to harness this surplus power to do useful computing.
The concurrent availability of a number of under-utilized workstations opens up many new possibilities if the physical parallelism inherent in separate CPU's can be exploited. Due to the relatively high cost of communication between machines [Popek 81, Peterson 79] when compared to memory access and instruction execution, use of a local area network for distributed parallel computation must be restricted to algorithms whose demand for cycles greatly exceeds their need for interprocess communication. Schnabel [Schnabel 85] is investigating a class of numerical algorithms that appear ideally suited for this environment.
As mentioned in the Introduction, the Distributed Computing Support (DCS) system supplies language support for distributed computing via a collection of systems routines. The user abstractions provided by DCS are the
asynchronous remote procedure call and distributed shared variable. While these are described in detail elsewhere [Harter 85b], a brief description will be given here to motivate the implementation of shared memory.
An asynchronous remote procedure call is like a normal procedure call in that input and output arguments are passed and returned. It may or may not actually execute remotely, so the programmer may make no assumptions as to the execution site, although we plan to investigate the feasibility of allowing some programmer input on location in the future. The difference is that rather than the normal procedure call synchronization, wherein the caller is suspended during execution of the procedure, in the asynchronous call the caller continues to execute. The caller may then either merely continue to compute or make more asynchronous calls. Resynchronization of the caller and the callee occurs at the caller's discretion, when he may request that he be suspended pending completion of outstanding calls.
In the case of the synchronous procedure call, the called procedure may access global data declared in some surrounding scope. The utility of this type of access prompted us to attempt to provide a similar type of access in the asynchronous case. The programmer's view of a distributed shared variable is that of a variable that may be read, written, or updated atomically with respect to other processes, where an update has the form:
\[
\text{var} := \text{user\_function} (\text{var}, \text{arg}),
\]
with the constraint that arg may not contain a direct reference to another shared variable. Thus, although the programmer must be aware of the fact that the variable may change between subsequent reads by a single process, the variable is always in a consistent state, independent of size or the locations of sharing processes.
The user of the DCS abstractions need not be concerned with networks, addresses, ports (or sockets), or the extra system interfaces for sending and receiving messages. However, their implementation placed a number of requirements on its host systems. First, it required a general message passing facility to allow for the distribution of processes, arguments and results, and for the sharing of data in a distributed environment. All of our machines currently run Berkeley Unix 4.2 and the IPC implementation there could meet our needs for message passing in the network.
The implementation of distributed shared variables posed additional problems. The distributed shared variables associated with a computation must be equally accessible to all processes of that computation independent of their physical site of execution. This desire for equal, efficient access excluded an implementation where processes obtained and assigned values by sending messages to a special storage or caretaker node. Thus, a copy of each variable is kept at each node. Further, the various copies of a shared variable must be kept consistent across nodes. This is ensured in DCS by a collection of manager
processes (Distributed Shared Memory Processes, DSMP's), one on each node. The DSMP's on the various nodes communicate update information to each other and guarantee adherence to the update discipline specified for each variable. Thus, the DSMP must also be able to access the shared data.
Again, our desire for efficiency caused us to reject a solution where the DSMP "owned" the copy of the shared data on each node and responded to requests by the various processes on that node. Particularly in the case of multiple concurrent computations or multiple processes of a single computation on one node, the DSMP could present a performance bottleneck in the system. Thus, to avoid the cost of sending messages within a node and the bottleneck effect of having one process do so much work, it is necessary for processes to be able to access the same set of memory locations. This feature was not available in Berkeley Unix, so we decided to add it.
Finally, there are synchronization requirements. The design of the DCS interface requires that any access to variables must be atomic. That is, it must be possible to read or write a variable atomically. There is no problem with this in communication between nodes to keep variables up to date, since the update protocols have been designed to send and install entire variables at once. A more difficult problem arises within a node, where the DSMP and one or more user processes all share a single copy of a shared variable. Since these variables may be larger than the width of the memory data path, atomic access is not guaranteed by instruction atomicity for reads and writes, let alone updates. This is an instance of the standard mutual exclusion problem. In our case, however, there are real-time constraints imposed by the role of the DSMP, who must maintain many variables simultaneously.
In adding shared memory to Unix, we could also add semaphores [Dijkstra 68a] or some other mechanism for mutual exclusion, but the standard semantics of semaphores is inadequate. Since the DSMP must acquire exclusive access to install changes from other nodes in the system, it would have to do a P on a semaphore that could be held by a user process. Since the user process may fail or terminate while holding the semaphore, the DSMP would be stuck forever. Even barring the error case, a user process could hold the semaphore over a page fault or disk read. Since the DSMP may be serving many processes and many shared variables, this wasted time could lead to significant performance degradation. This led us to the design of a semaphore that could be used in the face of performance constraints. The semaphores we included (see Section 3 and the Appendix) provide a timeout period so that a P operation will return after the semaphore has been decremented or the timeout period has passed. The timeout can be specified as having zero (for polling), infinite or some finite length.
3. Supplied Features
In this section we describe the interface and features included in our implementation as motivated in the previous section. The Unix manual pages for the new system routines are contained in the Appendix. First we describe the interface for attaching to and using shared segments, and then the declaration and use of semaphores.
3.1. Shared Segments
Our implementation provides shared memory segments that are mapped indistinguishably into the memory space of processes. Ordinary reading and writing of variables in shared memory occur in the same way as access to variables in private data space, i.e. there are no special access functions or system overhead involved. Shared segments may exist anywhere within the address space of a process and the number of shared segments that may be accessed by a process at one time is limited only by system table space. Our implementation currently enforces a system-wide limit of 20 segments of up to 20 pages existing concurrently; however, these limits may be changed simply by modifying manifest constants and recompiling the affected modules. The only restriction on the location of shared segments is that they begin on page boundaries and occupy an integral number of pages. Our implementation of shared data segments requires two new system calls, vshare and vrlse. These new calls allow the user process to create, attach, and detach itself from shared segments within the system.
3.1.1. Creation and Rendezvous
In order to share memory among a set of processes, one of the processes must "create" a shared segment by declaring a piece of its address space sharable. This is done by informing the system that it wishes to share a segment of its address space via a call to the routine vshare giving the start address and size of the space to share, and a null segment identifier seg_id. The kernel returns a unique (within the machine) identifier which is used to refer to the shared segment in the future.
Other processes in the set may then attach to the segment just created via the same system call. A process wishing to attach to a shared segment must do so by obtaining the seg_id for the segment and "trading in" a piece of its address space for a previously declared shared segment. Thus, prior to sharing memory there must be some initial communication between the sharing processes to exchange the seg_id, for example via a pre-arranged socket port name or file name. Having obtained the seg_id, the segment is mapped via a call to vshare giving the seg_id of the shared segment and start address and size of the address space to be traded. When this call returns, both processes have the shared segment mapped into their address spaces, though possibly at different logical addresses.
While attached to a shared segment, a process may neither **fork** a new copy of itself, nor may it **exec** a new text image. Although it would be possible to design and implement a reasonable semantics for the result of a **fork**, it was not required for our purposes. Since the kernel code implementing the **fork** operation is rather complex with interfaces throughout the system, we chose not to support it. The **exec** call is also complicated, but in this case it is hard to imagine integrating the semantics of **exec** with those of shared memory. An **exec** system call overwrites the address space of the calling process with the text and data segments of the program being **exec**'ed, expanding the address space if necessary. This causes several problems. First, since the address space of the process is likely to change shape across the **exec**, it may well be that the shared segment would be overwritten by the code of the new program. This is not likely to be the desired effect, and avoiding it would require the programmer to worry about object code sizes and other details. Second, since all data space of the process calling **exec** is reinitialized, the process would not be able to "remember" where the shared segment began. Thus, it seems that there isn't a clearly correct way to implement this feature at all. Finally, if the effect of a **fork** or an **exec** in the presence of shared memory is desired, it can be obtained by releasing the segment, making the call, and reattaching to the segment afterwards.
A process attached to a shared segment may detach itself from that segment explicitly by calling the routine **vrlse** giving the **seg_id** of the segment it wishes to detach. On the other hand, a process may be detached implicitly. When a process terminates, it is detached from all shared segments to which it is attached before the system goes through the standard termination processing.
In either case, when a process is detached from a shared segment, there are two possibilities. If the process being detached is the only process attached to the shared segment, then the segment becomes unsharable and the calling process continues with the data in the segment unchanged. On the other hand, if there are several processes attached to the shared segment at the time of the call, then the system replaces the shared segment in the calling process with new zero-filled pages.
### 3.1.2. Ownership
Shared segments are shared equally, i.e. there is no "owner" of a shared segment. Although the shared segment is initially part of the private address space of the process that first declares it sharable, that process has no special rights to the segment afterward. A process may attach to the shared segment by knowing its size and **seg_id**. Though this is no protection from malicious processes, the likelihood of guessing both correctly by accident is small. Thus, there are no privilege classes or special access rights for the segment that must be checked prior to granting access to the requested segment.
The process that "creates" the segment may very well not be the last one to access it, and the lifetime of a shared segment is not limited by the lifetime of the creator. Processes may come and go, but the segment remains sharable until every process that attached the segment has either released it explicitly or terminated.
3.2. Semaphores
Our implementation of semaphores is a natural extension to our shared data segments, and semaphores are intended to coordinate access to the shared objects contained in these segments. A shared data segment may have a number of semaphores, each associated with a particular offset within the segment. Note: the semaphores are associated with offsets in segments, not located in the segments, so a user process may only access a semaphore via the kernel operations supplied. Thus, while use of semaphores to coordinate access is not required, they cannot be overwritten by accident.
Semaphores provide an efficient resource control mechanism for a user process to synchronize access with any other process, with minimal kernel overhead. Our implementation of semaphores requires three new system calls: getsem, Psem, and Vsem, for the creation and use of semaphores. Again, the number of semaphores is limited only by system table space, which is easily modified.
3.2.1. Creation and Use
Since semaphores are associated with offsets within shared segments, a user process must be attached to a sharable segment in order to create or use a semaphore. A semaphore is created via a call to the kernel routine getsem, specifying a shared segment, an offset, and an initial value. The system returns a unique (within the machine) identifier sem_id (semaphore identifier), which is used to refer to the semaphore in the future. Subsequent calls to getsem specifying the same segment and offset location will return the same sem_id and have no effect on the semaphore value. Thus, two processes sharing a segment need not agree in advance on which is to create semaphores. If two processes attempt to create and then decrement the same semaphore simultaneously, then one process will create it successfully and exactly one process will successfully complete the P operation (assuming an initial value of 1). There are no guarantees as to the identity of either.
Once a process has the sem_id for an associated semaphore, it can operate on the semaphore with the new system calls Psem and Vsem, P and V respectively, although the semantics of our calls do not exactly match those of P and V in their pure forms. First, in order to facilitate the use of semaphores for resource counting (number of shared buffer slots etc.), we have implemented the so-called PV-chunk operations [Van Tilburgh 79] to reduce the likelihood of deadlock. Thus, a semaphore may be incremented or decremented by an integer value specified as a parameter to the system call.
As mentioned in Section 2.2 on the requirements imposed by the DCS system, it must be possible to guarantee synchronization and mutual exclusion in a real-time environment. This implies that the standard semantics for the \texttt{P} operation is inadequate to our purposes, since it implies possible arbitrary delay of the caller. Therefore, our \texttt{Psem} system call allows the user process to specify the amount of time it wishes to wait while trying to decrement the value of the semaphore. A return code indicates that the semaphore was successfully decremented (0) or that the call timed out (-1).
Thus, a process calling the kernel routine \texttt{Psem} specifies not only the semaphore (sem_id) but a decrement value (decr) and timeout value (time) as well, and the semantics is:
$$\texttt{Psem}(\texttt{sem\_id}, \texttt{decr}, \texttt{time}) \equiv \text{suspend the caller until } (\texttt{sem\_id} \geq \texttt{decr}) \lor (\texttt{time} \text{ elapsed}),$$
then atomically execute:
$$\text{if } (\texttt{sem\_id} \geq \texttt{decr})\ \{\ \texttt{sem\_id} \leftarrow \texttt{sem\_id} - \texttt{decr};\ \text{return}(0)\ \}\ \text{else return}(-1).$$
There are two special case values for the timeout interval. A value of 0 specifies that \texttt{Psem} is to return immediately, even if the decrement cannot be performed. A value of -1 specifies that the call is not to return unless the semaphore is successfully decremented. Thus, the call \texttt{Psem}(sem_id, 1, -1) has the same semantics as Dijkstra's \texttt{P}(sem_id) operation.
The \texttt{Vsem} system call requires a \texttt{sem}_id and a semaphore increment value. If the semaphore value is incremented enough to allow one or more waiting processes to proceed, then waiting processes are awakened in FIFO order. It is possible for a process waiting in a \texttt{Psem} operation with a high decrement value to be overtaken by a process with a low decrement value in the case where the \texttt{Vsem} increment was not great enough to satisfy the affected process.
Once created, a semaphore remains in existence as long as the segment with which it is associated exists. When the shared segment has been \texttt{vrlse}'d by all processes attached to it, the segment becomes unsharable and all semaphores associated with the segment are deleted from the system.
The five new system calls described above (\texttt{vshare}, \texttt{vrlse}, \texttt{getsem}, \texttt{Psem}, and \texttt{Vsem}) provide a general, efficient mechanism for data sharing and synchronization among any number of unrelated processes. Data access is indistinguishable from normal (private) access and no copying of information is required. The implementation of \texttt{Psem} and \texttt{Vsem} operations results in a synchronization mechanism involving very little kernel overhead, especially when compared to the use of "lock files." The efficiency of these operations will be further discussed in Section 5.
4. Implementation
The facilities introduced above have been implemented on the SUN workstation under the Sun Micro Systems version of Unix (for our purposes, a port of Berkeley Unix 4.2). The basic kernel environment for SUN Unix virtual memory includes a set of page tables for each process and a global, circularly linked list of page frame descriptors to implement a variation of the "clock" algorithm for memory management [Babaoglu 81]. Each page frame descriptor references the user page table entry for the page it contains. In addition, the SUN workstation has a separate memory mapping module, which translates the addresses generated by the CPU. The page tables of the currently executing process must be loaded from main memory into the memory map unit for translation to take place.
Our approach to shared memory was to implement the simplest scheme consistent with reasonable performance. Two processes sharing a segment have identical page table entries for the ranges of addresses corresponding to the shared segment. Semaphores exist only in the kernel, and may be referenced only in the Psem and Vsem calls via the sem_id's returned from the getsem system call.
Although the user interface for shared segments does not include the notion of a distinguished "owner process" for a shared segment, it is necessary to make such a distinction at the implementation level. Thus, in the following, we will assume that each shared segment has a current owner and zero or more subordinates. The owner is the process that first calls vshare to make a portion of its address space sharable; a subordinate is any process that subsequently attaches to the segment. The (physical) page frames containing the (virtual) pages that are made sharable by the call will contain the shared segment throughout its existence. These frames are initially allocated to the owner process before it calls vshare, and continue to be allocated to that process afterwards. The reason is that the page frame descriptors for any allocated pages in the system must contain a reference to a user page table, and it seemed simplest to leave them allocated to the owner. Thus, the owner is the only process that has "physical memory" allocated for the shared segment. If the owner releases the segment, then ownership transfers to one of the subordinates (if any), and the frame descriptors are modified to point to its page tables.
We added two data structures to the kernel, a segment descriptor table (sd_map) and a semaphore descriptor list (sem_list). The user values seg_id and sem_id are indices into sd_map and sem_list respectively. Each sd_map entry contains the size and start address of the shared segment in the owner's address space, a pointer to the owner's process table entry, a list of processes sharing that segment, and finally the start index to a list of the semaphores associated with that segment (sem_list). Each sem_list entry contains the index of the next in the list, the associated offset within its segment, the semaphore's
current value and the first index in a list of processes waiting on that semaphore (wait_list), and a pointer back to the sd_map entry describing the segment with which the semaphore is associated. The wait_list is a simple linked list of delayed Psem operations, each indicating the delayed process and the value by which it desires to decrement the semaphore.
When vshare is called, it first screens the input parameter values for legality. If the seg_id parameter is "0", a new sd_map entry is created corresponding to the segment described by the call parameters, the calling process is marked unswappable, and the involved pages and page frames are locked in memory. If the seg_id parameter is non-0, then the size in the sd_map entry corresponding to the passed seg_id is compared to the passed size. If the sizes match, then the caller's pages are returned to the system and the page table entries from the segment's owner are copied into the caller's page table.
Our decision to lock shared segments into memory can be traced to a number of factors. First, we desired that our extension have minimal impact on existing kernel routines and data structures to make it easy to add to other versions of the system in the future. Further, considering the complexity involved when compared to the actual performance gains, the positive return on our effort would have been minimal. If shared segments were not locked in memory, it would be necessary to modify the paging mechanism to keep page tables in several processes consistent when a page is paged out. Reference or modified bits would have to be copied from the process making the reference to the owner process (this information is referenced from the frame descriptor). These modifications would require another data structure in the kernel and one or more new fields in the kernel's process table. A mechanism similar to that currently used for shared text segments would have to be used, with the added complication that shared text segments may reside at different addresses in different processes.
The decision to lock shared pages and make their processes unswappable does not incur great performance penalties. First, processes are swapped in their entirety relatively infrequently in normal system execution, so making a process sharing memory unswappable does not involve a large cost. Second, the pages of shared memory segments are referenced by several processes and hence comparatively frequently. By the nature of the clock paging algorithm used in the kernel, these frequently referenced pages would probably remain in core anyway. Thus, locking them in memory is unlikely to change their core residence patterns. To ensure that this is the case, limits are set on the total number of pages that may be involved in shared segments and hence locked into core.
The vrlse call takes as its parameter the seg_id of a shared segment to which the caller is attached. After verifying that this is the case, one of three actions is taken. If the caller is a subordinate for the segment, then the caller's page tables are modified so as not to refer to the shared segment and are marked "fill on demand." This means that pages will be allocated to the process as those addresses are accessed. If the caller is the owner and there are existing subordinates, then the page frame descriptors are modified to point to the page tables of one of the subordinates, who thus becomes the new owner. The caller then gets "fill on demand" page table entries to replace the ones that referred to the shared segment. Finally, if the caller is the last process attached to the segment, the sd_map entry is deleted and the pages of the segment are unlocked from core. In all cases, if the process has just released its last shared segment, it is made swappable again.
When a process calls getsem with a seg_id, offset and initial value, it is first verified that the process is attached to the segment in question, that the offset is legal, and that the initial value is non-negative. If so, if there is not already a semaphore declared for that location, a new sem_list entry is allocated and initialized, and appended to the sem_list for the segment whose seg_id was passed. If the semaphore already exists, then its sem_list entry is located but not re-initialized. In both cases, the index (sem_id) of the entry is returned to the caller.
Psem first checks to see whether the caller is actually attached to the segment associated with the sem_id passed and whether the decrement value is positive. If so, it checks the value of the semaphore to determine whether a decrement is possible, i.e. whether the result would be non-negative. If so, the value is decremented and 0 is returned to indicate success. If not, the handling depends on the timeout value passed. If zero, the call returns immediately with -1 to indicate failure to decrement. If the value is -1, then a wait_list entry is allocated and appended to the current wait_list. Then the process calls the kernel sleep routine from within Psem using the address of the wait_list entry, and specifying a priority (PZERO) to prevent having to handle signals. Finally, for any positive time value, the caller allocates the wait_list entry as above, but before going to sleep, calls the kernel timeout routine saying that it is to be awakened in any case if the timeout period is exceeded. Then, when the process is awakened, it must determine whether it was awakened due to a timeout or as the result of a Vsem call. In the former case, Psem returns with an error code indicating a timeout has occurred. In the latter case, the routine untimeout will be called to cancel the previous request for a wakeup, and the call will return successfully after decrementing the semaphore.
Last, the Vsem call also checks for legality exactly as does the Psem routine. It then increments the value of the semaphore and scans the wait_list from front to back looking for processes who can now safely decrement the semaphore. For each such process, the address of the wait_list entry is passed to the kernel wakeup routine which makes the process executable again. When it has a chance to run, it will decrement the semaphore and return from the Psem call. A call to Vsem with legal parameters always returns successfully and never causes the caller to be delayed.
Four other kernel modules were modified to allow for shared memory. The modules for **fork** (kern_fork.c) and **exec** (kern_exec.c) were modified to test whether the caller is attached to shared memory and disallow the call if it is. The module handling process termination (kern_exit.c) was modified to test the terminating process for the existence of shared memory and to **vrlse** any shared segments. Finally, the system initialization routine (init_main.c) was modified to cause initialization of the shared memory data structures.
5. Performance
Shared memory provides a large speed improvement over the use of sockets for sharing of information among processes on the same machine. In this section, we give the results of some timing measurements made with our implementation.
First, the system calls used to implement synchronized communication in a shared segment are relatively efficient. The following table shows the times and numbers of instructions involved in a null system call, i.e. a system call with no kernel code executed beyond that for context switches, and in our synchronization calls. These tests were run on an otherwise unloaded SUN 120 workstation. The parenthesized values in the instructions column give instruction counts normalized to the null system call.
<table>
<thead>
<tr>
<th>Call</th>
<th>Time for 10,000</th>
<th>Time for 1</th>
<th># of Instructions</th>
</tr>
</thead>
<tbody>
<tr>
<td>null</td>
<td>3.5 sec</td>
<td>350 usec</td>
<td>292 (0 = base)</td>
</tr>
<tr>
<td>Vsem</td>
<td>4.1 sec</td>
<td>410 usec</td>
<td>342 (50)</td>
</tr>
<tr>
<td>Psem</td>
<td>4.3 sec</td>
<td>430 usec</td>
<td>358 (66)</td>
</tr>
<tr>
<td>getsem</td>
<td>5.5 sec</td>
<td>550 usec</td>
<td>458 (166)</td>
</tr>
</tbody>
</table>
As can be seen from the above results, the **Psem** and **Vsem** calls are very fast. Most of the cost is in the checking of parameters (e.g. verifying the existence of the semaphore and that the caller is attached to the segment with which it is associated) and in the kernel **timeout** and **untimeout** routines that our implementation uses. The getsem call, which is executed only once per semaphore, is somewhat more expensive, but still on the order of half the cost of the two context switches necessary to execute any system call.
Far more interesting than a simple listing of times for system calls is the data on a comparison of shared memory with sockets as a means of transferring data between processes. We set up two pairs of processes, one pair communicating via shared memory, the other via UNIX sockets. Each pair involved a reader process and a writer process. The writer was to transfer a series of blocks of various sizes to the reader process. Each size was transferred 100 times and the resulting times were then normalized to one transfer.
The shared memory implementation was essentially a one-slot version of the well-known bounded buffer problem, with the two processes synchronizing their transfers through the Psem and Vsem calls. Buffer sizes ranged from 128 bytes to 8192 bytes.
Similar code was set up for the two processes communicating via sockets; buffer sizes ranged from 128 bytes to a maximum of 2048 bytes. We have no data for stream sockets with buffers longer than 2048 bytes, as the implementation would not permit us to send longer buffers.
Below is a comparison of asynchronous block data transfer times between two processes using stream sockets versus shared memory with semaphores. Each read or write data transfer must wait for an acknowledgement of the previous read or write operation.
<table>
<thead>
<tr>
<th>Buffer Size (bytes)</th>
<th>Stream Socket (milliseconds)</th>
<th>Shared Memory (milliseconds)</th>
</tr>
</thead>
<tbody>
<tr>
<td>128</td>
<td>19.5</td>
<td>3.3</td>
</tr>
<tr>
<td>256</td>
<td>24.6</td>
<td>3.6</td>
</tr>
<tr>
<td>512</td>
<td>38.0</td>
<td>3.8</td>
</tr>
<tr>
<td>1024</td>
<td>62.8</td>
<td>4.6</td>
</tr>
<tr>
<td>1536</td>
<td>93.2</td>
<td>5.5</td>
</tr>
<tr>
<td>2048</td>
<td>107.0</td>
<td>6.4</td>
</tr>
<tr>
<td>4096</td>
<td>no data</td>
<td>9.4</td>
</tr>
<tr>
<td>8192</td>
<td>no data</td>
<td>17.2</td>
</tr>
</tbody>
</table>
For this test, we used a simple data transfer, because it seemed the most straightforward and involved the fewest assumptions about program usage patterns. As a result, the timings are as favorable as possible to the socket implementation. One could easily imagine providing a pseudo-shared memory facility based on sockets, with one process acting as a memory server. This server process would handle all access to shared data. Thus, for a simple update, it would be necessary for a process to request and receive data, process it and create a new value, and finally send it back, all via sockets. For this case, the times above for sockets would essentially double. The shared memory implementation would be somewhat faster than above, since the update would involve a single \texttt{Psem}, a single \texttt{Vsem}, and the transfer of only enough data to make the update.
6. Conclusion
We have implemented shared memory in a version of Unix running on a SUN workstation within the context of a general facility for distributed programming. The facility includes shared memory segments for data sharing and semaphores for synchronization. The semaphore implementation includes a time-out facility to make it useful for coordinating access to shared objects in an environment with real-time performance constraints. The implementation involved minimal change to the existing kernel and resulted in data sharing far more efficient and general than previously available under Unix.
7. Acknowledgements
We owe a debt of gratitude to many people for this work. Dennis Heimbigner taught the advanced systems course in which it began and added his insight to the design of the semaphore facility. Bob Gray, Keith Cowley, and Grant Rose were part of the initial project group, and Mike Schweitzer contributed to the final implementation done primarily by the second author. Finally, the presentation was much improved by Jon Shultis and Evi Nemeth, who waded through earlier versions.
Appendix
The following pages contain the Unix manual pages for our shared memory system calls.
NAME
vshare — create sharable data segment, or attach process to previously created sharable data segment.
SYNOPSIS
status = vshare(&seg_id, size, addr)
int seg_id, size, status
char *addr
DESCRIPTION
Vshare will make a data segment within the calling process globally sharable to any other process within the system, or replace a segment from the calling process with a segment made sharable by a previous call (by another process). Prior to a vshare call, the process must contain a segment that is equal in size to the desired global shared data segment. Size must have granularity of NBPG (number of bytes per page), and addr must be on a page boundary. Vshare has two types of operations:
To create a sharable segment, the process provides seg_id=0, a valid size, and a pointer to the data segment that is to be shared, addr. This segment must be within the calling process' data space prior to the vshare call. Upon successful return, seg_id will contain the unique segment identifier of the created segment. This segment identifier may be used by other processes to attach to the segment. If the call was unsuccessful, then seg_id will be meaningless and errno will contain an error return value.
To attach to a previously created sharable data segment, the process must provide the unique segment identifier of the shared data space, seg_id, and the size of the shared data segment, size. The identifier must be obtained from the original creator of the segment (see above). The attaching process must also provide the virtual address addr of an equivalent data segment that is within its address space. This data segment will be released to the free memory pool, and the corresponding page table entries will be changed to point at the desired shared data segment. If the return value is 0, then the segment was successfully attached, otherwise it failed. Failure can be due to an invalid combination of seg_id, size, and addr.
ERRORS
vshare has a zero return value unless there has been an error, in which case the global value errno will be set as follows.
[ESRCH] If seg_id not found (trying to attach to existing segment)
[ENOMEM] If the shared data table is full
[EINVAL] Bad size granularity, addr not in data space, or exceeds maximum size
[EALREADY] If process is already attached to specified segment
EXAMPLE
seg_id = 0; /* create new sharable segment */
size = 4096; /* must have page granularity */
addr = &x[0]; /* must be on a page boundary */
status = vshare(&seg_id, size, addr);
SEE ALSO
vrlse(2), getsem(2), Psem(2), Vsem(2)
AUTHOR
Greg Bollendonk, Grant Rose, Michael Schweitzer, Paul Harter
DIAGNOSTICS
When \texttt{vshare} returns a non-zero value, the global variable \texttt{errno} contains one of the above error codes.
BUGS
The maximum segment size depends on the size of physical memory.
NAME
vrlse — release a virtual shared data segment
SYNOPSIS
status = vrlse(seg_id)
int seg_id, status
DESCRIPTION
Vrlse causes the shared data segment previously allocated to a process to be released. The segment is identified by seg_id.
If the calling process is the last process referencing the segment, the segment is removed from the system, any semaphores declared within it are removed, and the calling process retains the segment as private data. If other processes are still attached to the segment, the released segment is replaced by zero-filled pages in the address space of the calling process; the page table entries of the current process (u.u_procp) are unmapped from the shared segment, and the process is removed from the shared data map structure in the kernel.
ERRORS
Vrlse has a zero return value unless there has been an error, in which case the global value errno will be set as follows.
[EINVAL] If seg_id is out of range
[ESRCH] If seg_id is invalid for this process
EXAMPLE
seg_id = 22; /* segment identifier returned from vshare */
status = vrlse(seg_id);
SEE ALSO
vshare(2), getsem(2), Psem(2), Vsem(2)
AUTHOR
Greg Bollendonk, Grant Rose, Michael Schweitzer, Paul Harter
DIAGNOSTICS
The global variable, errno, will be set if vrlse has a non-zero return code.
BUGS
None.
NAME
getsem — create a semaphore for a shared memory location
SYNOPSIS
sem_id = getsem(seg_id, offset, vinit)
int seg_id, offset, vinit, sem_id
DESCRIPTION
Getsem creates a semaphore associated with a given shared memory location referenced by the seg_id, offset pair and returns the unique identifier associated with it by getsem. The identifier sem_id must be used for all subsequent Psem and Vsem calls. The initial value vinit is the initialized value of the semaphore. A call to getsem can have two results:
Getsem creates a new semaphore for the given seg_id, offset pair, assigns the initial value vinit to it, and returns the identifier sem_id.
Getsem is called with the same seg_id, offset pair used in a previous call to create a semaphore. In this case getsem returns the sem_id associated with that semaphore and ignores the initial value passed.
The semaphore will be deleted when the segment whose seg_id was passed to create the semaphore is deleted. This will occur when the last process attached to the segment performs a vrlse on that segment.
ERRORS
Getsem will return a positive integer sem_id unless there has been an error, in which case the global value errno will be set as follows.
[ESRCH] If seg_id is not currently in the system
[EBADF] If seg_id has invalid range
[ENOMEM] If kernel semaphore free-list is empty
[EFAULT] If offset is not in seg_id
[EINVAL] If vinit < 0
EXAMPLE
seg_id = 5; /* segment identifier returned from vshare */
offset = 320;
vinit = 1;
sem_id = getsem(seg_id, offset, vinit);
SEE ALSO
Psem(2), Vsem(2), vshare(2), vrlse(2)
AUTHOR
Greg Bollendonk, Grant Rose, Michael Schweitzer, Paul Harter
DIAGNOSTICS
The global variable, errno, will be set if getsem has a zero or negative return code.
BUGS
None.
NAME
Psem — atomically decrement the value of a semaphore
SYNOPSIS
status = Psem(sem_id, value, tov)
int sem_id, value, tov, status
DESCRIPTION
Psem tries to decrement the value of semaphore sem_id by value. Sem_id is the unique semaphore identifier returned from getsem and value is a positive integer. Psem has the following results:
If the semaphore has a value greater than or equal to value, then the semaphore is decremented by value and Psem returns immediately with a return value of zero.
If the semaphore has a value less than value, then the calling process is suspended for a maximum time-out period of tov. The time-out value is tov/Hz seconds. Suspended processes are put on a FIFO queue for that semaphore. The value of tov has three possible ranges:
(1) equal to -1, causing 'wait-forever',
(2) equal to 0, causing immediate return,
(3) or a positive integer, wait until time-out has expired.
If the value of the semaphore is raised high enough to decrement it by value, prior to the end of the time-out period, then Psem returns with a return value of zero.
Otherwise, Psem will return a non-zero value if it fails to decrement the value of the semaphore within the specified time-out period.
ERRORS
Psem has a zero return value unless there has been an error, in which case the global value errno will be set as follows.
[ETIMEDOUT] If it fails to decrement semaphore sem_id within the time-out period tov.
[EBUSY] If the semaphore could not be decremented when tov = 0.
[EINVAL] If value <= 0.
[ENOMEM] If the kernel free-list for waiting processes is empty.
[EFAULT] If sem_id is invalid
EXAMPLE
sem_id = 622; /* semaphore identifier returned from getsem */
value = 1;
tov = 100;
status = Psem(sem_id, value, tov);
SEE ALSO
getsem(2), Vsem(2), vshare(2), vrlse(2)
AUTHOR
Greg Bollendonk, Grant Rose, Michael Schweitzer, Paul Harter
DIAGNOSTICS
The global variable, errno, will be set if Psem has a non-zero return code.
BUGS
Deadlock detection is not implemented
NAME
Vsem — atomically increment the value of a semaphore
SYNOPSIS
status = Vsem(sem_id, value)
int sem_id, value, status
DESCRIPTION
Vsem increments the value of a semaphore specified by sem_id, where sem_id is the unique semaphore identifier returned from getsem. If the call is successful, the semaphore will be incremented by value. If processes are currently suspended waiting to decrement this semaphore, zero or more may be allowed to proceed based on their decrement values.
ERRORS
Vsem has a zero return value unless there has been an error, in which case the global value
errno will be set as follows.
[EFAULT] If sem_id is invalid
[EINVAL] If value <= 0
EXAMPLE
sem_id = 6; /* semaphore identifier returned from getsem */
value = 1;
status = Vsem(sem_id, value);
SEE ALSO
getsem(2), Psem(2), vshare(2), vrlse(2)
AUTHOR
Greg Bollendonk, Grant Rose, Michael Schweitzer, Paul Harter
DIAGNOSTICS
The global variable, errno, will be set if Vsem has a non-zero return code.
BUGS
Deadlock detection is not implemented.
Multipurpose Internet Mail Extensions
(MIME) Part One:
Format of Internet Message Bodies
Status of this Memo
This document specifies an Internet standards track protocol for the
Internet community, and requests discussion and suggestions for
improvements. Please refer to the current edition of the "Internet
Official Protocol Standards" (STD 1) for the standardization state
and status of this protocol. Distribution of this memo is unlimited.
Abstract
STD 11, RFC 822, defines a message representation protocol specifying
considerable detail about US-ASCII message headers, and leaves the
message content, or message body, as flat US-ASCII text. This set of
documents, collectively called the Multipurpose Internet Mail
Extensions, or MIME, redefines the format of messages to allow for
(1) textual message bodies in character sets other than
US-ASCII,
(2) an extensible set of different formats for non-textual
message bodies,
(3) multi-part message bodies, and
(4) textual header information in character sets other than
US-ASCII.
These documents are based on earlier work documented in RFC 934, STD
11, and RFC 1049, but extends and revises them. Because RFC 822 said
so little about message bodies, these documents are largely
orthogonal to (rather than a revision of) RFC 822.
This initial document specifies the various headers used to describe
the structure of MIME messages. The second document, RFC 2046,
defines the general structure of the MIME media typing system and
defines an initial set of media types. The third document, RFC 2047,
describes extensions to RFC 822 to allow non-US-ASCII text data in
Internet mail header fields. The fourth document, RFC 2048, specifies various IANA registration procedures for MIME-related facilities. The fifth and final document, RFC 2049, describes MIME conformance criteria as well as providing some illustrative examples of MIME message formats, acknowledgements, and the bibliography.
These documents are revisions of RFCs 1521, 1522, and 1590, which themselves were revisions of RFCs 1341 and 1342. An appendix in RFC 2049 describes differences and changes from previous versions.
Table of Contents
1. Introduction
2. Definitions, Conventions, and Generic BNF Grammar
 2.1 CRLF
 2.2 Character Set
 2.3 Message
 2.4 Entity
 2.5 Body Part
 2.6 Body
 2.7 7bit Data
 2.8 8bit Data
 2.9 Binary Data
 2.10 Lines
3. MIME Header Fields
4. MIME-Version Header Field
5. Content-Type Header Field
 5.1 Syntax of the Content-Type Header Field
 5.2 Content-Type Defaults
6. Content-Transfer-Encoding Header Field
 6.1 Content-Transfer-Encoding Syntax
 6.2 Content-Transfer-Encodings Semantics
 6.3 New Content-Transfer-Encodings
 6.4 Interpretation and Use
 6.5 Translating Encodings
 6.6 Canonical Encoding Model
 6.7 Quoted-Printable Content-Transfer-Encoding
 6.8 Base64 Content-Transfer-Encoding
7. Content-ID Header Field
8. Content-Description Header Field
9. Additional MIME Header Fields
10. Summary
11. Security Considerations
12. Authors’ Addresses
A. Collected Grammar
1. Introduction
Since its publication in 1982, RFC 822 has defined the standard format of textual mail messages on the Internet. Its success has been such that the RFC 822 format has been adopted, wholly or partially, well beyond the confines of the Internet and the Internet SMTP transport defined by RFC 821. As the format has seen wider use, a number of limitations have proven increasingly restrictive for the user community.
RFC 822 was intended to specify a format for text messages. As such, non-text messages, such as multimedia messages that might include audio or images, are simply not mentioned. Even in the case of text, however, RFC 822 is inadequate for the needs of mail users whose languages require the use of character sets richer than US-ASCII. Since RFC 822 does not specify mechanisms for mail containing audio, video, Asian language text, or even text in most European languages, additional specifications are needed.
One of the notable limitations of RFC 821/822 based mail systems is the fact that they limit the contents of electronic mail messages to relatively short lines (e.g. 1000 characters or less [RFC-821]) of 7bit US-ASCII. This forces users to convert any non-textual data that they may wish to send into seven-bit bytes representable as printable US-ASCII characters before invoking a local mail UA (User Agent, a program with which human users send and receive mail). Examples of such encodings currently used in the Internet include pure hexadecimal, uuencode, the 3-in-4 base 64 scheme specified in RFC 1421, the Andrew Toolkit Representation [ATK], and many others.
The limitations of RFC 822 mail become even more apparent as gateways are designed to allow for the exchange of mail messages between RFC 822 hosts and X.400 hosts. X.400 [X400] specifies mechanisms for the inclusion of non-textual material within electronic mail messages. The current standards for the mapping of X.400 messages to RFC 822 messages specify either that X.400 non-textual material must be converted to (not encoded in) IA5Text format, or that they must be discarded, notifying the RFC 822 user that discarding has occurred. This is clearly undesirable, as information that a user may wish to receive is lost. Even though a user agent may not have the capability of dealing with the non-textual material, the user might have some mechanism external to the UA that can extract useful information from the material. Moreover, it does not allow for the fact that the message may eventually be gatewayed back into an X.400 message handling system (i.e., the X.400 message is "tunneled" through Internet mail), where the non-textual information would definitely become useful again.
This document describes several mechanisms that combine to solve most of these problems without introducing any serious incompatibilities with the existing world of RFC 822 mail. In particular, it describes:
(1) A MIME-Version header field, which uses a version number to declare a message to be conformant with MIME and allows mail processing agents to distinguish between such messages and those generated by older or non-conformant software, which are presumed to lack such a field.
(2) A Content-Type header field, generalized from RFC 1049, which can be used to specify the media type and subtype of data in the body of a message and to fully specify the native representation (canonical form) of such data.
(3) A Content-Transfer-Encoding header field, which can be used to specify both the encoding transformation that was applied to the body and the domain of the result. Encoding transformations other than the identity transformation are usually applied to data in order to allow it to pass through mail transport mechanisms which may have data or character set limitations.
(4) Two additional header fields that can be used to further describe the data in a body, the Content-ID and Content-Description header fields.
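As a purely illustrative example (the identifier and values below are invented), the header fields just listed might appear together in a conformant message as:

```
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: quoted-printable
Content-ID: <1234@host.example>
Content-Description: a short note
```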
All of the header fields defined in this document are subject to the general syntactic rules for header fields specified in RFC 822. In particular, all of these header fields except for Content-Disposition can include RFC 822 comments, which have no semantic content and should be ignored during MIME processing.
Finally, to specify and promote interoperability, RFC 2049 provides a basic applicability statement for a subset of the above mechanisms that defines a minimal level of "conformance" with this document.
HISTORICAL NOTE: Several of the mechanisms described in this set of documents may seem somewhat strange or even baroque at first reading. It is important to note that compatibility with existing standards AND robustness across existing practice were two of the highest priorities of the working group that developed this set of documents. In particular, compatibility was always favored over elegance.
2. Definitions, Conventions, and Generic BNF Grammar
Although the mechanisms specified in this set of documents are all described in prose, most are also described formally in the augmented BNF notation of RFC 822. Implementors will need to be familiar with this notation in order to understand this set of documents, and are referred to RFC 822 for a complete explanation of the augmented BNF notation.
Some of the augmented BNF in this set of documents makes named references to syntax rules defined in RFC 822. A complete formal grammar, then, is obtained by combining the collected grammar appendices in each document in this set with the BNF of RFC 822 plus the modifications to RFC 822 defined in RFC 1123 (which specifically changes the syntax for ‘return’, ‘date’ and ‘mailbox’).
All numeric and octet values are given in decimal notation in this set of documents. All media type values, subtype values, and parameter names as defined are case-insensitive. However, parameter values are case-sensitive unless otherwise specified for the specific parameter.
FORMATTING NOTE: Notes, such as this one, provide additional nonessential information which may be skipped by the reader without missing anything essential. The primary purpose of these non-essential notes is to convey information about the rationale of this set of documents, or to place these documents in the proper historical or evolutionary context. Such information may in particular be skipped by those who are focused entirely on building a conformant implementation, but may be of use to those who wish to understand why certain design choices were made.
2.1. CRLF
The term CRLF, in this set of documents, refers to the sequence of octets corresponding to the two US-ASCII characters CR (decimal value 13) and LF (decimal value 10) which, taken together, in this order, denote a line break in RFC 822 mail.
2.2. Character Set
The term "character set" is used in MIME to refer to a method of converting a sequence of octets into a sequence of characters. Note that unconditional and unambiguous conversion in the other direction is not required, in that not all characters may be representable by a given character set and a character set may provide more than one sequence of octets to represent a particular sequence of characters.
This definition is intended to allow various kinds of character encodings, from simple single-table mappings such as US-ASCII to complex table switching methods such as those that use ISO 2022’s techniques, to be used as character sets. However, the definition associated with a MIME character set name must fully specify the mapping to be performed. In particular, use of external profiling information to determine the exact mapping is not permitted.
NOTE: The term "character set" was originally used to describe such straightforward schemes as US-ASCII and ISO-8859-1 which have a simple one-to-one mapping from single octets to single characters. Multi-octet coded character sets and switching techniques make the situation more complex. For example, some communities use the term "character encoding" for what MIME calls a "character set", while using the phrase "coded character set" to denote an abstract mapping from integers (not octets) to characters.
2.3. Message
The term "message", when not further qualified, means either a (complete or "top-level") RFC 822 message being transferred on a network, or a message encapsulated in a body of type "message/rfc822" or "message/partial".
2.4. Entity
The term "entity", refers specifically to the MIME-defined header fields and contents of either a message or one of the parts in the body of a multipart entity. The specification of such entities is the essence of MIME. Since the contents of an entity are often called the "body", it makes sense to speak about the body of an entity. Any sort of field may be present in the header of an entity, but only those fields whose names begin with "content-" actually have any MIME-related meaning. Note that this does NOT imply that they have no meaning at all -- an entity that is also a message has non-MIME header fields whose meanings are defined by RFC 822.
2.5. Body Part
The term "body part" refers to an entity inside of a multipart entity.
2.6. Body
The term "body", when not further qualified, means the body of an entity, that is, the body of either a message or of a body part.
NOTE: The previous four definitions are clearly circular. This is unavoidable, since the overall structure of a MIME message is indeed recursive.
2.7. 7bit Data
"7bit data" refers to data that is all represented as relatively short lines with 998 octets or less between CRLF line separation sequences [RFC-821]. No octets with decimal values greater than 127 are allowed and neither are NULs (octets with decimal value 0). CR (decimal value 13) and LF (decimal value 10) octets only occur as part of CRLF line separation sequences.
2.8. 8bit Data
"8bit data" refers to data that is all represented as relatively short lines with 998 octets or less between CRLF line separation sequences [RFC-821]), but octets with decimal values greater than 127 may be used. As with "7bit data" CR and LF octets only occur as part of CRLF line separation sequences and no NULs are allowed.
2.9. Binary Data
"Binary data" refers to data where any sequence of octets whatsoever is allowed.
2.10. Lines
"Lines" are defined as sequences of octets separated by a CRLF sequences. This is consistent with both RFC 821 and RFC 822. "Lines" only refers to a unit of data in a message, which may or may not correspond to something that is actually displayed by a user agent.
3. MIME Header Fields
MIME defines a number of new RFC 822 header fields that are used to describe the content of a MIME entity. These header fields occur in at least two contexts:
(1) As part of a regular RFC 822 message header.
(2) In a MIME body part header within a multipart construct.
The formal definition of these header fields is as follows:
```
entity-headers := [ content CRLF ]
                  [ encoding CRLF ]
                  [ id CRLF ]
                  [ description CRLF ]
                  *( MIME-extension-field CRLF )

MIME-message-headers := entity-headers
                        fields
                        version CRLF
                        ; The ordering of the header
                        ; fields implied by this BNF
                        ; definition should be ignored.

MIME-part-headers := entity-headers
                     [ fields ]
                     ; Any field not beginning with
                     ; "content-" can have no defined
                     ; meaning and may be ignored.
                     ; The ordering of the header
                     ; fields implied by this BNF
                     ; definition should be ignored.
```
The syntax of the various specific MIME header fields will be described in the following sections.
4. MIME-Version Header Field
Since RFC 822 was published in 1982, there has really been only one format standard for Internet messages, and there has been little perceived need to declare the format standard in use. This document is an independent specification that complements RFC 822. Although the extensions in this document have been defined in such a way as to be compatible with RFC 822, there are still circumstances in which it might be desirable for a mail-processing agent to know whether a message was composed with the new standard in mind.
Therefore, this document defines a new header field, "MIME-Version", which is to be used to declare the version of the Internet message body format standard in use.
Messages composed in accordance with this document MUST include such a header field, with the following verbatim text:
MIME-Version: 1.0
The presence of this header field is an assertion that the message has been composed in compliance with this document.
Since it is possible that a future document might extend the message format standard again, a formal BNF is given for the content of the MIME-Version field:
```
version := "MIME-Version" "":" 1*DIGIT "." 1*DIGIT
```
Thus, future format specifiers, which might replace or extend "1.0", are constrained to be two integer fields, separated by a period. If a message is received with a MIME-version value other than "1.0", it cannot be assumed to conform with this document.
Note that the MIME-Version header field is required at the top level of a message. It is not required for each body part of a multipart entity. It is required for the embedded headers of a body of type "message/rfc822" or "message/partial" if and only if the embedded message is itself claimed to be MIME-conformant.
It is not possible to fully specify how a mail reader that conforms with MIME as defined in this document should treat a message that might arrive in the future with some value of MIME-Version other than "1.0".
It is also worth noting that version control for specific media types is not accomplished using the MIME-Version mechanism. In particular, some formats (such as application/postscript) have version numbering conventions that are internal to the media format. Where such conventions exist, MIME does nothing to supersede them. Where no such conventions exist, a MIME media type might use a "version" parameter in the content-type field if necessary.
NOTE TO IMPLEMENTORS: When checking MIME-Version values any RFC 822 comment strings that are present must be ignored. In particular, the following four MIME-Version fields are equivalent:
MIME-Version: 1.0
MIME-Version: 1.0 (produced by MetaSend Vx.x)
MIME-Version: (produced by MetaSend Vx.x) 1.0
MIME-Version: 1.(produced by MetaSend Vx.x)0
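A checker that honors this rule must strip RFC 822 comments before matching the version grammar. The following Python sketch is illustrative only (the function name is invented here, and nested comments and quoted strings are not handled):

```python
import re

def mime_version(value: str):
    # Remove RFC 822 comment strings: parenthesized runs with no
    # nested parentheses. Real RFC 822 comments may nest; this
    # simplified sketch does not handle that case.
    stripped = re.sub(r"\([^()]*\)", "", value)
    # Match the two integer fields of the version grammar.
    m = re.fullmatch(r"\s*(\d+)\s*\.\s*(\d+)\s*", stripped)
    return (int(m.group(1)), int(m.group(2))) if m else None
```

All four example fields above yield the same result, (1, 0).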
In the absence of a MIME-Version field, a receiving mail user agent (whether conforming to MIME requirements or not) may optionally choose to interpret the body of the message according to local conventions. Many such conventions are currently in use and it should be noted that in practice non-MIME messages can contain just about anything.
It is impossible to be certain that a non-MIME mail message is actually plain text in the US-ASCII character set since it might well be a message that, using some set of nonstandard local conventions that predate MIME, includes text in another character set or non-textual data presented in a manner that cannot be automatically recognized (e.g., a uuencoded compressed UNIX tar file).
5. Content-Type Header Field
The purpose of the Content-Type field is to describe the data contained in the body fully enough that the receiving user agent can pick an appropriate agent or mechanism to present the data to the user, or otherwise deal with the data in an appropriate manner. The value in this field is called a media type.
HISTORICAL NOTE: The Content-Type header field was first defined in RFC 1049. RFC 1049 used a simpler and less powerful syntax, but one that is largely compatible with the mechanism given here.
The Content-Type header field specifies the nature of the data in the body of an entity by giving media type and subtype identifiers, and by providing auxiliary information that may be required for certain media types. After the media type and subtype names, the remainder of the header field is simply a set of parameters, specified in an attribute=value notation. The ordering of parameters is not significant.
In general, the top-level media type is used to declare the general type of data, while the subtype specifies a specific format for that type of data. Thus, a media type of "image/xyz" is enough to tell a user agent that the data is an image, even if the user agent has no knowledge of the specific image format "xyz". Such information can be used, for example, to decide whether or not to show a user the raw data from an unrecognized subtype -- such an action might be reasonable for unrecognized subtypes of text, but not for unrecognized subtypes of image or audio. For this reason, registered subtypes of text, image, audio, and video should not contain embedded information that is really of a different type. Such compound formats should be represented using the "multipart" or "application" types.
Parameters are modifiers of the media subtype, and as such do not fundamentally affect the nature of the content. The set of meaningful parameters depends on the media type and subtype. Most parameters are associated with a single specific subtype. However, a given top-level media type may define parameters which are applicable to any subtype of that type. Parameters may be required by their defining content type or subtype or they may be optional. MIME implementations must ignore any parameters whose names they do not recognize.
For example, the "charset" parameter is applicable to any subtype of "text", while the "boundary" parameter is required for any subtype of the "multipart" media type.
There are NO globally-meaningful parameters that apply to all media types. Truly global mechanisms are best addressed, in the MIME model, by the definition of additional Content-* header fields.
An initial set of seven top-level media types is defined in RFC 2046. Five of these are discrete types whose content is essentially opaque as far as MIME processing is concerned. The remaining two are composite types whose contents require additional handling by MIME processors.
This set of top-level media types is intended to be substantially complete. It is expected that additions to the larger set of supported types can generally be accomplished by the creation of new subtypes of these initial types. In the future, more top-level types may be defined only by a standards-track extension to this standard. If another top-level type is to be used for any reason, it must be given a name starting with "X-" to indicate its non-standard status and to avoid a potential conflict with a future official name.
5.1. Syntax of the Content-Type Header Field
In the Augmented BNF notation of RFC 822, a Content-Type header field value is defined as follows:
```
content := "Content-Type" ":" type "/" subtype
*(";" parameter)
; Matching of media type and subtype
; is ALWAYS case-insensitive.
type := discrete-type / composite-type
discrete-type := "text" / "image" / "audio" / "video" /
"application" / extension-token
composite-type := "message" / "multipart" / extension-token
extension-token := ietf-token / x-token
ietf-token := <An extension token defined by a
standards-track RFC and registered
with IANA.>
x-token := <The two characters "X-" or "x-" followed, with
no intervening white space, by any token>
subtype := extension-token / iana-token
iana-token := <A publicly-defined extension token. Tokens
of this form must be registered with IANA
as specified in RFC 2048.>
parameter := attribute "=" value
attribute := token
; Matching of attributes
; is ALWAYS case-insensitive.
value := token / quoted-string
token := 1*<any (US-ASCII) CHAR except SPACE, CTLs,
or tspecials>
tspecials := "(" / ")" / "<" / ">
/ "@" / "," / "." / ";" / "\" / ">
/ ";" / ";" / ";" / ";" / ";" / ";=
; Must be in quoted-string,
; to use within parameter values
```
Note that the definition of "tspecials" is the same as the RFC 822 definition of "specials" with the addition of the three characters "/", "?", and "=", and the removal of ".".
Note also that a subtype specification is MANDATORY -- it may not be omitted from a Content-Type header field. As such, there are no default subtypes.
The type, subtype, and parameter names are not case sensitive. For example, TEXT, Text, and TeXt are all equivalent top-level media types. Parameter values are normally case sensitive, but sometimes are interpreted in a case-insensitive fashion, depending on the intended use. (For example, multipart boundaries are case-sensitive, but the "access-type" parameter for message/External-body is not case-sensitive.)
Note that the value of a quoted string parameter does not include the quotes. That is, the quotation marks in a quoted-string are not a part of the value of the parameter, but are merely used to delimit that parameter value. In addition, comments are allowed in accordance with RFC 822 rules for structured header fields. Thus the following two forms
```
Content-type: text/plain; charset=us-ascii (Plain text)
Content-type: text/plain; charset="us-ascii"
```
are completely equivalent.
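Python's standard email library treats the two forms identically, which can serve as an illustrative check (this snippet is an aside, not part of the specification):

```python
from email.message import EmailMessage

# Both spellings of the charset parameter parse identically; the
# quotation marks are delimiters, not part of the parameter value.
for value in ('text/plain; charset=us-ascii',
              'text/plain; charset="us-ascii"'):
    msg = EmailMessage()
    msg["Content-Type"] = value
    assert msg.get_content_type() == "text/plain"
    assert msg.get_param("charset") == "us-ascii"
```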
Beyond this syntax, the only syntactic constraint on the definition of subtype names is the desire that their uses must not conflict. That is, it would be undesirable to have two different communities using "Content-Type: application/foobar" to mean two different things. The process of defining new media subtypes, then, is not intended to be a mechanism for imposing restrictions, but simply a mechanism for publicizing their definition and usage. There are, therefore, two acceptable mechanisms for defining new media subtypes:
1. Private values (starting with "X-") may be defined bilaterally between two cooperating agents without outside registration or standardization. Such values cannot be registered or standardized.
2. New standard values should be registered with IANA as described in RFC 2048.
The second document in this set, RFC 2046, defines the initial set of media types for MIME.
5.2. Content-Type Defaults
Default RFC 822 messages without a MIME Content-Type header are taken by this protocol to be plain text in the US-ASCII character set, which can be explicitly specified as:
Content-type: text/plain; charset=us-ascii
This default is assumed if no Content-Type header field is specified. It is also recommended that this default be assumed when a syntactically invalid Content-Type header field is encountered. In the presence of a MIME-Version header field and the absence of any Content-Type header field, a receiving User Agent can also assume that plain US-ASCII text was the sender’s intent. Plain US-ASCII text may still be assumed in the absence of a MIME-Version or the presence of a syntactically invalid Content-Type header field, but the sender’s intent might have been otherwise.
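Python's standard email library applies exactly this default, which the following illustrative snippet (not part of the specification) demonstrates:

```python
from email.message import EmailMessage

msg = EmailMessage()                 # no Content-Type header at all
# The library falls back to the RFC 2045 default media type...
assert msg.get_content_type() == "text/plain"
# ...and a caller can supply the default charset as the fallback value.
assert msg.get_param("charset", "us-ascii") == "us-ascii"
```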
6. Content-Transfer-Encoding Header Field
Many media types which could be usefully transported via email are represented, in their "natural" format, as 8bit character or binary data. Such data cannot be transmitted over some transfer protocols. For example, RFC 821 (SMTP) restricts mail messages to 7bit US-ASCII data with lines no longer than 1000 characters including any trailing CRLF line separator.
It is necessary, therefore, to define a standard mechanism for encoding such data into a 7bit short line format. Proper labelling of unencoded material in less restrictive formats for direct use over less restrictive transports is also desirable. This document specifies that such encodings will be indicated by a new "Content-Transfer-Encoding" header field. This field has not been defined by any previous standard.
6.1. Content-Transfer-Encoding Syntax
The Content-Transfer-Encoding field’s value is a single token specifying the type of encoding, as enumerated below. Formally:
```
encoding := "Content-Transfer-Encoding" ":" mechanism

mechanism := "7bit" / "8bit" / "binary" /
             "quoted-printable" / "base64" /
             ietf-token / x-token
```
These values are not case sensitive -- Base64 and BASE64 and bAsE64 are all equivalent. An encoding type of 7BIT requires that the body is already in a 7bit mail-ready representation. This is the default value -- that is, "Content-Transfer-Encoding: 7BIT" is assumed if the Content-Transfer-Encoding header field is not present.
6.2. Content-Transfer-Encodings Semantics
This single Content-Transfer-Encoding token actually provides two pieces of information. It specifies what sort of encoding transformation the body was subjected to and hence what decoding operation must be used to restore it to its original form, and it specifies what the domain of the result is.
The transformation part of any Content-Transfer-Encodings specifies, either explicitly or implicitly, a single, well-defined decoding algorithm, which for any sequence of encoded octets either transforms it to the original sequence of octets which was encoded, or shows that it is illegal as an encoded sequence. Content-Transfer-Encodings transformations never depend on any additional external profile information for proper operation. Note that while decoders must produce a single, well-defined output for a valid encoding no such restrictions exist for encoders: Encoding a given sequence of octets to different, equivalent encoded sequences is perfectly legal.
Three transformations are currently defined: identity, the "quoted-printable" encoding, and the "base64" encoding. The domains are "binary", "8bit" and "7bit".
The Content-Transfer-Encoding values "7bit", "8bit", and "binary" all mean that the identity (i.e. NO) encoding transformation has been performed. As such, they serve simply as indicators of the domain of the body data, and provide useful information about the sort of encoding that might be needed for transmission in a given transport system. The terms "7bit data", "8bit data", and "binary data" are all defined in Section 2.
The quoted-printable and base64 encodings transform their input from an arbitrary domain into material in the "7bit" range, thus making it safe to carry over restricted transports. The specific definition of the transformations are given below.
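Python's standard library implements both transformations (`base64` and `quopri`); the following illustrative snippet (an aside, not part of the specification) shows each one mapping 8bit octets into the 7bit domain and back:

```python
import base64
import quopri

data = "Gr\u00fc\u00dfe aus K\u00f6ln".encode("latin-1")  # 8bit source octets

b64 = base64.b64encode(data)      # "dense"/"uniform" 7bit-safe form
qp = quopri.encodestring(data)    # mostly human-readable 7bit-safe form

# Both encoded forms land entirely in the 7bit domain...
assert all(octet < 128 for octet in b64 + qp)
# ...and both decoders restore the original octet sequence exactly.
assert base64.b64decode(b64) == data
assert quopri.decodestring(qp) == data
```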
The proper Content-Transfer-Encoding label must always be used. Labelling unencoded data containing 8bit characters as "7bit" is not allowed, nor is labelling unencoded non-line-oriented data as anything other than "binary" allowed.
Unlike media subtypes, a proliferation of Content-Transfer-Encoding values is both undesirable and unnecessary. However, establishing only a single transformation into the "7bit" domain does not seem possible. There is a tradeoff between the desire for a compact and efficient encoding of largely-binary data and the desire for a somewhat readable encoding of data that is mostly, but not entirely, 7bit. For this reason, at least two encoding mechanisms are necessary: a more or less readable encoding (quoted-printable) and a "dense" or "uniform" encoding (base64).
Mail transport for unencoded 8bit data is defined in RFC 1652. As of the initial publication of this document, there are no standardized Internet mail transports for which it is legitimate to include unencoded binary data in mail bodies. Thus there are no circumstances in which the "binary" Content-Transfer-Encoding is actually valid in Internet mail. However, in the event that binary mail transport becomes a reality in Internet mail, or when MIME is used in conjunction with any other binary-capable mail transport mechanism, binary bodies must be labelled as such using this mechanism.
NOTE: The five values defined for the Content-Transfer-Encoding field imply nothing about the media type other than the algorithm by which it was encoded or the transport system requirements if unencoded.
6.3. New Content-Transfer-Encodings
Implementors may, if necessary, define private Content-Transfer-Encoding values, but must use an x-token, which is a name prefixed by "X-", to indicate its non-standard status, e.g., "Content-Transfer-Encoding: x-my-new-encoding". Additional standardized Content-Transfer-Encoding values must be specified by a standards-track RFC. The requirements such specifications must meet are given in RFC 2048. As such, all content-transfer-encoding namespace except that beginning with "X-" is explicitly reserved to the IETF for future use.
Unlike media types and subtypes, the creation of new Content-Transfer-Encoding values is STRONGLY discouraged, as it seems likely to hinder interoperability with little potential benefit.
6.4. Interpretation and Use
If a Content-Transfer-Encoding header field appears as part of a message header, it applies to the entire body of that message. If a Content-Transfer-Encoding header field appears as part of an entity’s headers, it applies only to the body of that entity. If an entity is of type "multipart" the Content-Transfer-Encoding is not permitted to have any value other than "7bit", "8bit" or "binary". Even more severe restrictions apply to some subtypes of the "message" type.
It should be noted that most media types are defined in terms of octets rather than bits, so that the mechanisms described here are mechanisms for encoding arbitrary octet streams, not bit streams. If a bit stream is to be encoded via one of these mechanisms, it must first be converted to an 8bit byte stream using the network standard bit order ("big-endian"), in which the earlier bits in a stream become the higher-order bits in an 8bit byte. A bit stream not ending at an 8bit boundary must be padded with zeroes. RFC 2046 provides a mechanism for noting the addition of such padding in the case of the application/octet-stream media type, which has a "padding" parameter.
The encoding mechanisms defined here explicitly encode all data in US-ASCII. Thus, for example, suppose an entity has header fields such as:
Content-Type: text/plain; charset=ISO-8859-1
Content-transfer-encoding: base64
This must be interpreted to mean that the body is a base64 US-ASCII encoding of data that was originally in ISO-8859-1, and will be in that character set again after decoding.
Certain Content-Transfer-Encoding values may only be used on certain media types. In particular, it is EXPRESSLY FORBIDDEN to use any encodings other than "7bit", "8bit", or "binary" with any composite media type, i.e. one that recursively includes other Content-Type fields. Currently the only composite media types are "multipart" and "message". All encodings that are desired for bodies of type multipart or message must be done at the innermost level, by encoding the actual body that needs to be encoded.
It should also be noted that, by definition, if a composite entity has a transfer-encoding value such as "7bit", but one of the enclosed entities has a less restrictive value such as "8bit", then either the outer "7bit" labelling is in error, because 8bit data are included, or the inner "8bit" labelling placed an unnecessarily high demand on the transport system because the actual included data were actually 7bit-safe.
NOTE ON ENCODING RESTRICTIONS: Though the prohibition against using content-transfer-encodings on composite body data may seem overly restrictive, it is necessary to prevent nested encodings, in which data are passed through an encoding algorithm multiple times, and must be decoded multiple times in order to be properly viewed. Nested encodings add considerable complexity to user agents: Aside from the obvious efficiency problems with such multiple encodings, they can obscure the basic structure of a message. In particular, they can imply that several decoding operations are necessary simply to find out what types of bodies a message contains. Banning nested encodings may complicate the job of certain mail gateways, but this seems less of a problem than the effect of nested encodings on user agents.
Any entity with an unrecognized Content-Transfer-Encoding must be treated as if it has a Content-Type of "application/octet-stream", regardless of what the Content-Type header field actually says.
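A conforming reader can apply this rule with a simple guard. The following Python sketch is illustrative only (the set literal and function name are invented here):

```python
# The five transfer encodings defined by this document.
KNOWN_ENCODINGS = {"7bit", "8bit", "binary", "quoted-printable", "base64"}

def effective_content_type(declared_type: str, cte: str) -> str:
    # If the transfer encoding cannot be undone, the declared media
    # type is unusable and the body must be treated as opaque octets.
    if cte.lower() in KNOWN_ENCODINGS:
        return declared_type
    return "application/octet-stream"
```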
NOTE ON THE RELATIONSHIP BETWEEN CONTENT-TYPE AND CONTENT-TRANSFER-ENCODING: It may seem that the Content-Transfer-Encoding could be inferred from the characteristics of the media that is to be encoded, or, at the very least, that certain Content-Transfer-Encodings could be mandated for use with specific media types. There are several reasons why this is not the case. First, given the varying types of transports used for mail, some encodings may be appropriate for some combinations of media types and transports but not for others. (For example, in an 8bit transport, no encoding would be required for text in certain character sets, while such encodings are clearly required for 7bit SMTP.)
Second, certain media types may require different types of transfer encoding under different circumstances. For example, many PostScript bodies might consist entirely of short lines of 7bit data and hence require no encoding at all. Other PostScript bodies (especially those using Level 2 PostScript’s binary encoding mechanism) may only be reasonably represented using a binary transport encoding. Finally, since the Content-Type field is intended to be an open-ended specification mechanism, strict specification of an association between media types and encodings effectively couples the specification of an application protocol with a specific lower-level transport. This is not desirable since the developers of a media type should not have to be aware of all the transports in use and what their limitations are.
6.5. Translating Encodings
The quoted-printable and base64 encodings are designed so that conversion between them is possible. The only issue that arises in such a conversion is the handling of hard line breaks in quoted-printable encoding output. When converting from quoted-printable to base64 a hard line break in the quoted-printable form represents a CRLF sequence in the canonical form of the data. It must therefore be converted to a corresponding encoded CRLF in the base64 form of the data. Similarly, a CRLF sequence in the canonical form of the data obtained after base64 decoding must be converted to a quoted-printable hard line break, but ONLY when converting text data.
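The quoted-printable-to-base64 direction is the simpler one, since base64 encodes CRLF octets like any other data. An illustrative sketch (the helper name is invented; the reverse, text-only direction is not shown):

```python
import base64
import quopri

def qp_to_base64(qp_body: bytes) -> bytes:
    # Decoding quoted-printable removes soft line breaks and yields the
    # canonical form, in which hard line breaks are literal CRLF octets;
    # base64 then simply encodes those octets along with the rest.
    return base64.b64encode(quopri.decodestring(qp_body))
```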
6.6. Canonical Encoding Model
There was some confusion, in the previous versions of this RFC, regarding the model for when email data was to be converted to canonical form and encoded, and in particular how this process would affect the treatment of CRLF's, given that the representation of newlines varies greatly from system to system, and the relationship between content-transfer-encodings and character sets. A canonical model for encoding is presented in RFC 2049 for this reason.
6.7. Quoted-Printable Content-Transfer-Encoding
The Quoted-Printable encoding is intended to represent data that largely consists of octets that correspond to printable characters in the US-ASCII character set. It encodes the data in such a way that the resulting octets are unlikely to be modified by mail transport. If the data being encoded are mostly US-ASCII text, the encoded form of the data remains largely recognizable by humans. A body which is entirely US-ASCII may also be encoded in Quoted-Printable to ensure the integrity of the data should the message pass through a character-translating, and/or line-wrapping gateway.
In this encoding, octets are to be represented as determined by the following rules:
(1) (General 8bit representation) Any octet, except a CR or LF that is part of a CRLF line break of the canonical (standard) form of the data being encoded, may be represented by an "=" followed by a two digit hexadecimal representation of the octet’s value. The digits of the hexadecimal alphabet, for this purpose, are "0123456789ABCDEF". Uppercase letters must be used; lowercase letters are not allowed. Thus, for example, the decimal value 12 (US-ASCII form feed) can be represented by "=0C", and the decimal value 61 (US-ASCII EQUAL SIGN) can be represented by "=3D". This rule must be followed except when the following rules allow an alternative encoding.
(2) (Literal representation) Octets with decimal values of 33 through 60 inclusive, and 62 through 126, inclusive, MAY be represented as the US-ASCII characters which correspond to those octets (EXCLAMATION POINT through LESS THAN, and GREATER THAN through TILDE, respectively).
(3) (White Space) Octets with values of 9 and 32 MAY be represented as US-ASCII TAB (HT) and SPACE characters, respectively, but MUST NOT be so represented at the end of an encoded line. Any TAB (HT) or SPACE characters on an encoded line MUST thus be followed on that line by a printable character. In particular, an "=" at the end of an encoded line, indicating a soft line break (see rule #5) may follow one or more TAB (HT) or SPACE characters. It follows that an octet with decimal value 9 or 32 appearing at the end of an encoded line must be represented according to Rule #1. This rule is necessary because some MTAs (Message Transport Agents, programs which transport messages from one user to another, or perform a portion of such transfers) are known to pad lines of text with SPACEs, and others are known to remove "white space" characters from the end of a line. Therefore, when decoding a Quoted-Printable body, any trailing white space on a line must be deleted, as it will necessarily have been added by intermediate transport agents.
(4) (Line Breaks) A line break in a text body, represented as a CRLF sequence in the text canonical form, must be represented by a (RFC 822) line break, which is also a CRLF sequence, in the Quoted-Printable encoding. Since the canonical representation of media types other than text does not generally include the representation of line breaks as CRLF sequences, no hard line breaks (i.e. line breaks that are intended to be meaningful and to be displayed to the user) can occur in the quoted-printable encoding of such types. Sequences like "=0D", "=0A", "=0A=0D" and "=0D=0A" will routinely appear in non-text data represented in quoted-printable, of course.
Note that many implementations may elect to encode the local representation of various content types directly rather than converting to canonical form first, encoding, and then converting back to local representation. In particular, this may apply to plain text material on systems that use newline conventions other than a CRLF terminator sequence. Such an implementation optimization is permissible, but only when the combined canonicalization-encoding step is equivalent to performing the three steps separately.
(5) (Soft Line Breaks) The Quoted-Printable encoding REQUIRES that encoded lines be no more than 76 characters long. If longer lines are to be encoded with the Quoted-Printable encoding, "soft" line breaks must be used. An equal sign as the last character on an encoded line indicates such a non-significant ("soft") line break in the encoded text.
Thus if the "raw" form of the line is a single unencoded line that says:
Now’s the time for all folk to come to the aid of their country.
This can be represented, in the Quoted-Printable encoding, as:
Now’s the time =
for all folk to come=
to the aid of their country.
This provides a mechanism with which long lines are encoded in such a way as to be restored by the user agent. The 76 character limit does not count the trailing CRLF, but counts all other characters, including any equal signs.
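Python's `quopri` module applies this rule automatically, which the following illustrative snippet (an aside, not part of the specification) demonstrates on an over-long line:

```python
import quopri

line = b"x" * 200 + b"\n"          # far longer than 76 characters
encoded = quopri.encodestring(line)

# Soft line breaks keep every encoded line within the 76-character
# limit (the trailing "=" counts toward the limit, the CRLF does not).
assert all(len(chunk) <= 76 for chunk in encoded.split(b"\n"))
# Decoding removes the soft breaks and restores the original line.
assert quopri.decodestring(encoded) == line
```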
Since the hyphen character ("-") may be represented as itself in the Quoted-Printable encoding, care must be taken, when encapsulating a quoted-printable encoded body inside one or more multipart entities, to ensure that the boundary delimiter does not appear anywhere in the encoded body. (A good strategy is to choose a boundary that includes a character sequence such as "=_" which can never appear in a quoted-printable body. See the definition of multipart messages in RFC 2046.)
NOTE: The quoted-printable encoding represents something of a compromise between readability and reliability in transport. Bodies encoded with the quoted-printable encoding will work reliably over most mail gateways, but may not work perfectly over a few gateways, notably those involving translation into EBCDIC. A higher level of confidence is offered by the base64 Content-Transfer-Encoding. A way to get reasonably reliable transport through EBCDIC gateways is to also quote the US-ASCII characters
!"#$@[\]^`{|}~
according to rule #1.
Because quoted-printable data is generally assumed to be line-oriented, it is to be expected that the representation of the breaks between the lines of quoted-printable data may be altered in transport, in the same manner that plain text mail has always been altered in Internet mail when passing between systems with differing newline conventions. If such alterations are likely to constitute a
corruption of the data, it is probably more sensible to use the base 64 encoding rather than the quoted-printable encoding.
NOTE: Several kinds of substrings cannot be generated according to the encoding rules for the quoted-printable content-transfer-encoding, and hence are formally illegal if they appear in the output of a quoted-printable encoder. This note enumerates these cases and suggests ways to handle such illegal substrings if any are encountered in quoted-printable data that is to be decoded.
(1) An "=" followed by two hexadecimal digits, one or both of which are lowercase letters in "abcdef", is formally illegal. A robust implementation might choose to recognize them as the corresponding uppercase letters.
(2) An "=" followed by a character that is neither a hexadecimal digit (including "abcdef") nor the CR character of a CRLF pair is illegal. This case can be the result of US-ASCII text having been included in a quoted-printable part of a message without itself having been subjected to quoted-printable encoding. A reasonable approach by a robust implementation might be to include the "=" character and the following character in the decoded data without any transformation and, if possible, indicate to the user that proper decoding was not possible at this point in the data.
(3) An "=" cannot be the ultimate or penultimate character in an encoded object. This could be handled as in case (2) above.
(4) Control characters other than TAB, or CR and LF as parts of CRLF pairs, must not appear. The same is true for octets with decimal values greater than 126. If found in incoming quoted-printable data by a decoder, a robust implementation might exclude them from the decoded data and warn the user that illegal characters were discovered.
(5) Encoded lines must not be longer than 76 characters, not counting the trailing CRLF. If longer lines are found in incoming, encoded data, a robust implementation might nevertheless decode the lines, and might report the erroneous encoding to the user.
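A lenient decoder following these robustness suggestions can be sketched as below. The function name robust_qp_decode is hypothetical; the sketch handles cases (1) through (3) above (lowercase hex digits accepted, a stray "=" passed through untransformed, a trailing "=" treated as a soft break).

```python
import re

def robust_qp_decode(data: bytes) -> bytes:
    """Lenient quoted-printable decoder per the robustness notes above.

    Case (1): lowercase hex digits after "=" are accepted.
    Cases (2)/(3): an "=" not followed by two hex digits (or a CRLF
    soft break) is passed through to the output untransformed.
    """
    out = bytearray()
    i = 0
    while i < len(data):
        c = data[i:i + 1]
        if c != b"=":
            out += c
            i += 1
            continue
        rest = data[i + 1:i + 3]
        if rest[:2] == b"\r\n":
            i += 3  # soft line break: drop "=\r\n" entirely
        elif re.fullmatch(rb"[0-9A-Fa-f]{2}", rest):
            out.append(int(rest, 16))  # case (1): accept lowercase hex
            i += 3
        else:
            out += c  # cases (2)/(3): keep the "=" as-is
            i += 1
    return bytes(out)
```

For example, robust_qp_decode(b"a=3db") accepts the lowercase escape and yields b"a=b", while an unencodable sequence like b"x=zz" is passed through unchanged.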
WARNING TO IMPLEMENTORS: If binary data is encoded in quoted-printable, care must be taken to encode CR and LF characters as "=0D" and "=0A", respectively. In particular, a CRLF sequence in binary data should be encoded as "=0D=0A". Otherwise, if CRLF were represented as a hard line break, it might be incorrectly decoded on platforms with different line break conventions.
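Python's standard binascii module can illustrate this warning: its b2a_qp encoder takes an istext flag, and passing istext=False makes it treat CR and LF as binary octets to be escaped rather than as line breaks. (The sample bytes here are arbitrary.)

```python
import binascii

# Binary input: CR and LF must become "=0D" and "=0A", never hard breaks.
enc = binascii.b2a_qp(b"ab\r\ncd", istext=False)
assert b"=0D=0A" in enc

# Decoding restores the original octets exactly.
assert binascii.a2b_qp(enc) == b"ab\r\ncd"
```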
For formalists, the syntax of quoted-printable data is described by the following grammar:
quoted-printable := qp-line *(CRLF qp-line)

qp-line := *(qp-segment transport-padding CRLF)
           qp-part transport-padding

qp-part := qp-section
           ; Maximum length of 76 characters

qp-segment := qp-section *(SPACE / TAB) "="
              ; Maximum length of 76 characters

qp-section := [*(ptext / SPACE / TAB) ptext]

ptext := hex-octet / safe-char

safe-char := <any octet with decimal value of 33 through 60
             inclusive, and 62 through 126>

hex-octet := "=" 2(DIGIT / "A" / "B" / "C" / "D" / "E" / "F")

transport-padding := *LWSP-char
IMPORTANT: The addition of LWSP between the elements shown in this BNF is NOT allowed since this BNF does not specify a structured header field.
6.8. Base64 Content-Transfer-Encoding
The Base64 Content-Transfer-Encoding is designed to represent arbitrary sequences of octets in a form that need not be humanly readable. The encoding and decoding algorithms are simple, but the encoded data are consistently only about 33 percent larger than the unencoded data. This encoding is virtually identical to the one used in Privacy Enhanced Mail (PEM) applications, as defined in RFC 1421.
A 65-character subset of US-ASCII is used, enabling 6 bits to be represented per printable character. (The extra 65th character, "=" is used to signify a special processing function.)
NOTE: This subset has the important property that it is represented identically in all versions of ISO 646, including US-ASCII, and all characters in the subset are also represented identically in all versions of EBCDIC. Other popular encodings, such as the encoding used by the uuencode utility, Macintosh binhex 4.0 [RFC-1741], and the base85 encoding specified as part of Level 2 PostScript, do not share these properties, and thus do not fulfill the portability requirements a binary transport encoding for mail must meet.
The encoding process represents 24-bit groups of input bits as output strings of 4 encoded characters. Proceeding from left to right, a 24-bit input group is formed by concatenating 3 8bit input groups. These 24 bits are then treated as 4 concatenated 6-bit groups, each of which is translated into a single digit in the base64 alphabet. When encoding a bit stream via the base64 encoding, the bit stream must be presumed to be ordered with the most-significant-bit first. That is, the first bit in the stream will be the high-order bit in the first 8bit byte, and the eighth bit will be the low-order bit in the first 8bit byte, and so on.
Each 6-bit group is used as an index into an array of 64 printable characters. The character referenced by the index is placed in the output string. These characters, identified in Table 1, below, are selected so as to be universally representable, and the set excludes characters with particular significance to SMTP (e.g., ".", CR, LF) and to the multipart boundary delimiters defined in RFC 2046 (e.g., "-" ).
Table 1: The Base64 Alphabet
<table>
<thead>
<tr>
<th>Value Encoding</th>
<th>Value Encoding</th>
<th>Value Encoding</th>
<th>Value Encoding</th>
</tr>
</thead>
<tbody>
<tr>
<td>0 A</td>
<td>17 R</td>
<td>34 i</td>
<td>51 z</td>
</tr>
<tr>
<td>1 B</td>
<td>18 S</td>
<td>35 j</td>
<td>52 0</td>
</tr>
<tr>
<td>2 C</td>
<td>19 T</td>
<td>36 k</td>
<td>53 1</td>
</tr>
<tr>
<td>3 D</td>
<td>20 U</td>
<td>37 l</td>
<td>54 2</td>
</tr>
<tr>
<td>4 E</td>
<td>21 V</td>
<td>38 m</td>
<td>55 3</td>
</tr>
<tr>
<td>5 F</td>
<td>22 W</td>
<td>39 n</td>
<td>56 4</td>
</tr>
<tr>
<td>6 G</td>
<td>23 X</td>
<td>40 o</td>
<td>57 5</td>
</tr>
<tr>
<td>7 H</td>
<td>24 Y</td>
<td>41 p</td>
<td>58 6</td>
</tr>
<tr>
<td>8 I</td>
<td>25 Z</td>
<td>42 q</td>
<td>59 7</td>
</tr>
<tr>
<td>9 J</td>
<td>26 a</td>
<td>43 r</td>
<td>60 8</td>
</tr>
<tr>
<td>10 K</td>
<td>27 b</td>
<td>44 s</td>
<td>61 9</td>
</tr>
<tr>
<td>11 L</td>
<td>28 c</td>
<td>45 t</td>
<td>62 +</td>
</tr>
<tr>
<td>12 M</td>
<td>29 d</td>
<td>46 u</td>
<td>63 /</td>
</tr>
<tr>
<td>13 N</td>
<td>30 e</td>
<td>47 v</td>
<td></td>
</tr>
<tr>
<td>14 O</td>
<td>31 f</td>
<td>48 w</td>
<td>(pad) =</td>
</tr>
<tr>
<td>15 P</td>
<td>32 g</td>
<td>49 x</td>
<td></td>
</tr>
<tr>
<td>16 Q</td>
<td>33 h</td>
<td>50 y</td>
<td></td>
</tr>
</tbody>
</table>
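The 24-bit group procedure described above can be sketched directly. The ALPHABET constant below simply spells out Table 1 in value order; encode_group is an illustrative name, and this sketch handles only full 24-bit groups, not the end-of-body padding cases.

```python
ALPHABET = ("ABCDEFGHIJKLMNOPQRSTUVWXYZ"
            "abcdefghijklmnopqrstuvwxyz"
            "0123456789+/")  # Table 1, values 0 through 63

def encode_group(three_octets: bytes) -> str:
    """Encode one full 24-bit input group as four base64 characters."""
    assert len(three_octets) == 3
    n = int.from_bytes(three_octets, "big")  # most-significant bit first
    # Extract the four 6-bit groups from high to low and index Table 1.
    return "".join(ALPHABET[(n >> shift) & 0x3F] for shift in (18, 12, 6, 0))

assert encode_group(b"Man") == "TWFu"
```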
The encoded output stream must be represented in lines of no more than 76 characters each. All line breaks or other characters not found in Table 1 must be ignored by decoding software. In base64 data, characters other than those in Table 1, line breaks, and other white space probably indicate a transmission error, about which a warning message or even a message rejection might be appropriate under some circumstances.
Special processing is performed if fewer than 24 bits are available at the end of the data being encoded. A full encoding quantum is always completed at the end of a body. When fewer than 24 input bits are available in an input group, zero bits are added (on the right) to form an integral number of 6-bit groups. Padding at the end of the data is performed using the "=" character. Since all base64 input is an integral number of octets, only the following cases can arise: (1) the final quantum of encoding input is an integral multiple of 24 bits; here, the final unit of encoded output will be an integral multiple of 4 characters with no "=" padding, (2) the final quantum of encoding input is exactly 8 bits; here, the final unit of encoded output will be two characters followed by two "=" padding characters, or (3) the final quantum of encoding input is exactly 16 bits; here, the final unit of encoded output will be three characters followed by one "=" padding character.
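The three padding cases can be observed with Python's standard base64 module:

```python
import base64

# Case (1): final quantum is a multiple of 24 bits -> no "=" padding.
assert base64.b64encode(b"abc") == b"YWJj"
# Case (2): final quantum is exactly 8 bits -> two "=" padding characters.
assert base64.b64encode(b"a") == b"YQ=="
# Case (3): final quantum is exactly 16 bits -> one "=" padding character.
assert base64.b64encode(b"ab") == b"YWI="
```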
Because it is used only for padding at the end of the data, the occurrence of any "=" characters may be taken as evidence that the end of the data has been reached (without truncation in transit). No
such assurance is possible, however, when the number of octets transmitted was a multiple of three and no "=" characters are present.
Any characters outside of the base64 alphabet are to be ignored in base64-encoded data.
Care must be taken to use the proper octets for line breaks if base64 encoding is applied directly to text material that has not been converted to canonical form. In particular, text line breaks must be converted into CRLF sequences prior to base64 encoding. The important thing to note is that this may be done directly by the encoder rather than in a prior canonicalization step in some implementations.
NOTE: There is no need to worry about quoting potential boundary delimiters within base64-encoded bodies within multipart entities because no hyphen characters are used in the base64 encoding.
7. Content-ID Header Field
In constructing a high-level user agent, it may be desirable to allow one body to make reference to another. Accordingly, bodies may be labelled using the "Content-ID" header field, which is syntactically identical to the "Message-ID" header field:
id := "Content-ID" ":" msg-id
Like the Message-ID values, Content-ID values must be generated to be world-unique.
The Content-ID value may be used for uniquely identifying MIME entities in several contexts, particularly for caching data referenced by the message/external-body mechanism. Although the Content-ID header is generally optional, its use is MANDATORY in implementations which generate data of the optional MIME media type "message/external-body". That is, each message/external-body entity must have a Content-ID field to permit caching of such data.
It is also worth noting that the Content-ID value has special semantics in the case of the multipart/alternative media type. This is explained in the section of RFC 2046 dealing with multipart/alternative.
8. Content-Description Header Field
The ability to associate some descriptive information with a given body is often desirable. For example, it may be useful to mark an "image" body as "a picture of the Space Shuttle Endeavor." Such text may be placed in the Content-Description header field. This header field is always optional.
description := "Content-Description" ":" *text
The description is presumed to be given in the US-ASCII character set, although the mechanism specified in RFC 2047 may be used for non-US-ASCII Content-Description values.
9. Additional MIME Header Fields
Future documents may elect to define additional MIME header fields for various purposes. Any new header field that further describes the content of a message should begin with the string "Content-" to allow such fields which appear in a message header to be distinguished from ordinary RFC 822 message header fields.
MIME-extension-field := <Any RFC 822 header field which begins with the string "Content-">
10. Summary
Using the MIME-Version, Content-Type, and Content-Transfer-Encoding header fields, it is possible to include, in a standardized way, arbitrary types of data with RFC 822 conformant mail messages. No restrictions imposed by either RFC 821 or RFC 822 are violated, and care has been taken to avoid problems caused by additional restrictions imposed by the characteristics of some Internet mail transport mechanisms (see RFC 2049).
The next document in this set, RFC 2046, specifies the initial set of media types that can be labelled and transported using these headers.
11. Security Considerations
Security issues are discussed in the second document in this set, RFC 2046.
12. Authors’ Addresses
For more information, the authors of this document are best contacted via Internet mail:
Ned Freed
Innosoft International, Inc.
1050 East Garvey Avenue South
West Covina, CA 91790
USA
Phone: +1 818 919 3600
Fax: +1 818 919 3614
EMail: ned@innosoft.com
Nathaniel S. Borenstein
First Virtual Holdings
25 Washington Avenue
Morristown, NJ 07960
USA
Phone: +1 201 540 8967
Fax: +1 201 993 3032
EMail: nsb@nsb.fv.com
MIME is a result of the work of the Internet Engineering Task Force Working Group on RFC 822 Extensions. The chairman of that group, Greg Vaudreuil, may be reached at:
Gregory M. Vaudreuil
Octel Network Services
17080 Dallas Parkway
Dallas, TX 75248-1905
USA
EMail: Greg.Vaudreuil@Octel.Com
Appendix A -- Collected Grammar
This appendix contains the complete BNF grammar for all the syntax specified by this document.
By itself, however, this grammar is incomplete. It refers by name to several syntax rules that are defined by RFC 822. Rather than reproduce those definitions here, and risk unintentional differences between the two, this document simply refers the reader to RFC 822 for the remaining definitions. Wherever a term is undefined, it refers to the RFC 822 definition.
attribute := token
; Matching of attributes
; is ALWAYS case-insensitive.
composite-type := "message" / "multipart" / extension-token
content := "Content-Type" ":" type "/" subtype
*(";" parameter)
; Matching of media type and subtype
; is ALWAYS case-insensitive.
description := "Content-Description" ":" *text
discrete-type := "text" / "image" / "audio" / "video" /
"application" / extension-token
encoding := "Content-Transfer-Encoding" ":" mechanism
entity-headers := [ content CRLF ]
[ encoding CRLF ]
[ id CRLF ]
[ description CRLF ]
*( MIME-extension-field CRLF )
extension-token := ietf-token / x-token
hex-octet := "=" 2(DIGIT / "A" / "B" / "C" / "D" / "E" / "F")
; Octet must be used for characters > 127, =,
; SPACEs or TABs at the ends of lines, and is
; recommended for any character not listed in
; RFC 2049 as "mail-safe".
iana-token := <A publicly-defined extension token. Tokens
of this form must be registered with IANA
as specified in RFC 2048.>
ietf-token := <An extension token defined by a standards-track RFC and registered with IANA.>
id := "Content-ID" ":" msg-id
mechanism := "7bit" / "8bit" / "binary" /
"quoted-printable" / "base64" /
ietf-token / x-token
MIME-extension-field := <Any RFC 822 header field which begins with the string "Content-">
MIME-message-headers := entity-headers
fields
version CRLF
; The ordering of the header fields implied by this BNF
; definition should be ignored.
MIME-part-headers := entity-headers
[fields]
; Any field not beginning with "content-"
; can have no defined meaning and may be ignored.
; The ordering of the header fields implied by this BNF
; definition should be ignored.
parameter := attribute "=" value
ptext := hex-octet / safe-char
qp-line := *(qp-segment transport-padding CRLF)
qp-part transport-padding
qp-part := qp-section
; Maximum length of 76 characters
qp-section := [* (ptext / SPACE / TAB) ptext]
qp-segment := qp-section *(SPACE / TAB) "="
; Maximum length of 76 characters
quoted-printable := qp-line *(CRLF qp-line)
safe-char := <any octet with decimal value of 33 through 60 inclusive, and 62 through 126>
; Characters not listed as "mail-safe" in
; RFC 2049 are also not recommended.
subtype := extension-token / iana-token
token := 1*<any (US-ASCII) CHAR except SPACE, CTLs, or tspecials>
transport-padding := *LWSP-char
; Composers MUST NOT generate
; non-zero length transport
; padding, but receivers MUST
; be able to handle padding
; added by message transports.
tspecials := "(" / ")" / "<" / ">" / "@" /
             "," / ";" / ":" / "\" / <">
             "/" / "[" / "]" / "?" / "="
; Must be in quoted-string,
; to use within parameter values
type := discrete-type / composite-type
value := token / quoted-string
version := "MIME-Version" ":" 1*DIGIT "." 1*DIGIT
x-token := <The two characters "X-" or "x-" followed, with
no intervening white space, by any token>